task e2e-llm-inference-service has failed: "step-fail-if-needed" exited with code 1: Error
[get-kubeconfig] Found kubeconfig secret: cluster-886sz-admin-kubeconfig
[get-kubeconfig] Wrote kubeconfig to /credentials/cluster-886sz-kubeconfig
[get-kubeconfig] Found admin password secret: cluster-886sz-admin-password
[get-kubeconfig] Retrieved username
[get-kubeconfig] Wrote password to /credentials/cluster-886sz-password
[get-kubeconfig] API Server URL: https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443
[get-kubeconfig] Console URL: https://console-openshift-console.apps.2b2ba84f-8fa8-4283-8703-51a68b63f80d.prod.konfluxeaas.com
[clone-repo] 260424_sync_upstream
[clone-repo] https://github.com/vivekk16/kserve
[clone-repo] Cloning into '/workspace/source'...
[e2e-llm-inference-service] + bash
[e2e-llm-inference-service] + STATUS_FILE=/test-status/deploy-and-e2e-status
[e2e-llm-inference-service] + echo failed
[e2e-llm-inference-service] + COMPONENT_NAME=kserve-agent-ci
[e2e-llm-inference-service] ++ jq -r --arg component_name kserve-agent-ci '.[$component_name].image'
[e2e-llm-inference-service] + export KSERVE_AGENT_IMAGE=quay.io/opendatahub/kserve-agent@sha256:d8f23d76f2d9886b9b5bc998ebdde5dc6c174340fd697aa08b5f7fbfe1b78fbb
[e2e-llm-inference-service] + KSERVE_AGENT_IMAGE=quay.io/opendatahub/kserve-agent@sha256:d8f23d76f2d9886b9b5bc998ebdde5dc6c174340fd697aa08b5f7fbfe1b78fbb
[e2e-llm-inference-service] + COMPONENT_NAME=kserve-controller-ci
[e2e-llm-inference-service] ++ jq -r --arg component_name kserve-controller-ci '.[$component_name].image'
[e2e-llm-inference-service] + export KSERVE_CONTROLLER_IMAGE=quay.io/opendatahub/kserve-controller@sha256:17948f44080970bb86aa62595512dc5a4d1c952cb593206525ba3e962c50167d
[e2e-llm-inference-service] + KSERVE_CONTROLLER_IMAGE=quay.io/opendatahub/kserve-controller@sha256:17948f44080970bb86aa62595512dc5a4d1c952cb593206525ba3e962c50167d
[e2e-llm-inference-service] + COMPONENT_NAME=kserve-router-ci
[e2e-llm-inference-service] ++ jq -r --arg component_name kserve-router-ci '.[$component_name].image'
[e2e-llm-inference-service] + export KSERVE_ROUTER_IMAGE=quay.io/opendatahub/kserve-router@sha256:434eda131522047dad0943830e77efdaea314155166dd4fe016a4bb35d654538
[e2e-llm-inference-service] + KSERVE_ROUTER_IMAGE=quay.io/opendatahub/kserve-router@sha256:434eda131522047dad0943830e77efdaea314155166dd4fe016a4bb35d654538
[e2e-llm-inference-service] + COMPONENT_NAME=kserve-storage-initializer-ci
[e2e-llm-inference-service] ++ jq -r --arg component_name kserve-storage-initializer-ci '.[$component_name].image'
[e2e-llm-inference-service] + export STORAGE_INITIALIZER_IMAGE=quay.io/opendatahub/kserve-storage-initializer@sha256:0183e14c0f4ec83676f700a2c092ab4e3599fa115c6cab4546b23e5207166882
[e2e-llm-inference-service] + STORAGE_INITIALIZER_IMAGE=quay.io/opendatahub/kserve-storage-initializer@sha256:0183e14c0f4ec83676f700a2c092ab4e3599fa115c6cab4546b23e5207166882
[e2e-llm-inference-service] + COMPONENT_NAME=odh-kserve-llmisvc-controller-ci
[e2e-llm-inference-service] ++ jq -r --arg component_name odh-kserve-llmisvc-controller-ci '.[$component_name].image'
[e2e-llm-inference-service] + export LLMISVC_CONTROLLER_IMAGE=quay.io/opendatahub/odh-kserve-llmisvc-controller@sha256:4652b4fca0c82d368884d41620d5c4832fece9b8ec9e821cf1c018cb1820482c
[e2e-llm-inference-service] + LLMISVC_CONTROLLER_IMAGE=quay.io/opendatahub/odh-kserve-llmisvc-controller@sha256:4652b4fca0c82d368884d41620d5c4832fece9b8ec9e821cf1c018cb1820482c
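The `jq` trace lines above show the step resolving each component's image digest from a JSON map keyed by component name, which matches the shape of a Konflux snapshot. A minimal sketch of that pattern, with `snapshot.json` standing in for whatever input the pipeline actually feeds to `jq` (the real input is not visible in this log):

    # snapshot.json (hypothetical stand-in), e.g.:
    #   {"kserve-agent-ci": {"image": "quay.io/opendatahub/kserve-agent@sha256:..."}}
    COMPONENT_NAME=kserve-agent-ci
    export KSERVE_AGENT_IMAGE=$(jq -r --arg component_name "$COMPONENT_NAME" \
        '.[$component_name].image' snapshot.json)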
[e2e-llm-inference-service] + ./test/scripts/openshift-ci/run-e2e-tests.sh 'llminferenceservice and cluster_cpu' 2 llm-d
[e2e-llm-inference-service] Installing on cluster
[e2e-llm-inference-service] Using namespace: kserve for KServe components
[e2e-llm-inference-service] SKLEARN_IMAGE=quay.io/opendatahub/sklearn-serving-runtime:odh-pr-1445
[e2e-llm-inference-service] KSERVE_CONTROLLER_IMAGE=quay.io/opendatahub/kserve-controller@sha256:17948f44080970bb86aa62595512dc5a4d1c952cb593206525ba3e962c50167d
[e2e-llm-inference-service] LLMISVC_CONTROLLER_IMAGE=quay.io/opendatahub/odh-kserve-llmisvc-controller@sha256:4652b4fca0c82d368884d41620d5c4832fece9b8ec9e821cf1c018cb1820482c
[e2e-llm-inference-service] KSERVE_AGENT_IMAGE=quay.io/opendatahub/kserve-agent@sha256:d8f23d76f2d9886b9b5bc998ebdde5dc6c174340fd697aa08b5f7fbfe1b78fbb
[e2e-llm-inference-service] KSERVE_ROUTER_IMAGE=quay.io/opendatahub/kserve-router@sha256:434eda131522047dad0943830e77efdaea314155166dd4fe016a4bb35d654538
[e2e-llm-inference-service] STORAGE_INITIALIZER_IMAGE=quay.io/opendatahub/kserve-storage-initializer@sha256:0183e14c0f4ec83676f700a2c092ab4e3599fa115c6cab4546b23e5207166882
[e2e-llm-inference-service] ERROR_404_ISVC_IMAGE=quay.io/opendatahub/error-404-isvc:odh-pr-1445
[e2e-llm-inference-service] SUCCESS_200_ISVC_IMAGE=quay.io/opendatahub/success-200-isvc:odh-pr-1445
[e2e-llm-inference-service] [INFO] Installing Kustomize v5.8.1 for linux/amd64...
[e2e-llm-inference-service] [SUCCESS] Successfully installed Kustomize v5.8.1 to /workspace/source/bin/kustomize
[e2e-llm-inference-service] v5.8.1
[e2e-llm-inference-service] make: Entering directory '/workspace/source'
[e2e-llm-inference-service] [INFO] Installing yq v4.52.1 for linux/amd64...
[e2e-llm-inference-service] [SUCCESS] Successfully installed yq v4.52.1 to /workspace/source/bin/yq
[e2e-llm-inference-service] yq (https://github.com/mikefarah/yq/) version v4.52.1
[e2e-llm-inference-service] make: Leaving directory '/workspace/source'
[e2e-llm-inference-service] Installing KServe Python SDK ...
[e2e-llm-inference-service] [INFO] Installing uv 0.7.8 for linux/amd64...
[e2e-llm-inference-service] [SUCCESS] Successfully installed uv 0.7.8 to /workspace/source/bin/uv
[e2e-llm-inference-service] warning: Failed to read project metadata (No `pyproject.toml` found in current directory or any parent directory). Running `uv self version` for compatibility. This fallback will be removed in the future; pass `--preview` to force an error.
[e2e-llm-inference-service] uv 0.7.8
[e2e-llm-inference-service] Creating virtual environment...
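Judging by this invocation and the pytest session at the end of the log, the three positional arguments appear to be the pytest marker expression, the xdist worker count, and the deployment mode, with the `*_IMAGE` variables exported earlier overriding the images the installer deploys. A reproduction sketch under those assumptions (check run-e2e-tests.sh for the authoritative usage):

    export KSERVE_CONTROLLER_IMAGE=quay.io/opendatahub/kserve-controller@sha256:...   # PR digest
    export LLMISVC_CONTROLLER_IMAGE=quay.io/opendatahub/odh-kserve-llmisvc-controller@sha256:...
    ./test/scripts/openshift-ci/run-e2e-tests.sh 'llminferenceservice and cluster_cpu' 2 llm-d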
[e2e-llm-inference-service] warning: virtualenv's `--clear` has no effect (uv always clears the virtual environment)
[e2e-llm-inference-service] Using CPython 3.9.25 interpreter at: /usr/bin/python3
[e2e-llm-inference-service] Creating virtual environment at: .venv
[e2e-llm-inference-service] /workspace/source
[e2e-llm-inference-service] Using CPython 3.11.13 interpreter at: /usr/bin/python3.11
[e2e-llm-inference-service] Creating virtual environment at: .venv
[e2e-llm-inference-service] Resolved 259 packages in 1ms
[e2e-llm-inference-service] Building kserve @ file:///workspace/source/python/kserve
[e2e-llm-inference-service] Downloading pandas (12.5MiB)
[e2e-llm-inference-service] Downloading aiohttp (1.7MiB)
[e2e-llm-inference-service] Downloading cryptography (4.3MiB)
[e2e-llm-inference-service] Downloading kubernetes (1.9MiB)
[e2e-llm-inference-service] Downloading portforward (3.9MiB)
[e2e-llm-inference-service] Downloading mypy (17.2MiB)
[e2e-llm-inference-service] Downloading setuptools (1.2MiB)
[e2e-llm-inference-service] Downloading botocore (12.9MiB)
[e2e-llm-inference-service] Downloading pydantic-core (2.0MiB)
[e2e-llm-inference-service] Downloading grpcio-tools (2.5MiB)
[e2e-llm-inference-service] Downloading grpcio (6.4MiB)
[e2e-llm-inference-service] Downloading pyarrow (40.1MiB)
[e2e-llm-inference-service] Downloading black (1.6MiB)
[e2e-llm-inference-service] Downloading uvloop (3.8MiB)
[e2e-llm-inference-service] Downloading numpy (15.7MiB)
[e2e-llm-inference-service] Building timeout-sampler==1.0.3
[e2e-llm-inference-service] Building python-simple-logger==2.0.19
[e2e-llm-inference-service] Building pyasn==1.6.2
[e2e-llm-inference-service] Downloading aiohttp
[e2e-llm-inference-service] Downloading black
[e2e-llm-inference-service] Downloading pydantic-core
[e2e-llm-inference-service] Downloading grpcio-tools
[e2e-llm-inference-service] Downloading setuptools
[e2e-llm-inference-service] Downloading portforward
[e2e-llm-inference-service] Downloading uvloop
[e2e-llm-inference-service] Built python-simple-logger==2.0.19
[e2e-llm-inference-service] Downloading cryptography
[e2e-llm-inference-service] Downloading grpcio
[e2e-llm-inference-service] Downloading kubernetes
[e2e-llm-inference-service] Built timeout-sampler==1.0.3
[e2e-llm-inference-service] Built kserve @ file:///workspace/source/python/kserve
[e2e-llm-inference-service] Downloading numpy
[e2e-llm-inference-service] Built pyasn==1.6.2
[e2e-llm-inference-service] Downloading pandas
[e2e-llm-inference-service] Downloading botocore
[e2e-llm-inference-service] Downloading pyarrow
[e2e-llm-inference-service] Downloading mypy
[e2e-llm-inference-service] Prepared 100 packages in 1.85s
[e2e-llm-inference-service] warning: Failed to hardlink files; falling back to full copy. This may lead to degraded performance.
[e2e-llm-inference-service] If the cache and target directories are on different filesystems, hardlinking may not be supported.
[e2e-llm-inference-service] If this is intentional, set `export UV_LINK_MODE=copy` or use `--link-mode=copy` to suppress this warning.
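The hardlink warning is harmless but noisy; as the message itself says, it shows up when the uv cache and the target directory sit on different filesystems, and can be silenced on such runners with:

    export UV_LINK_MODE=copy   # or pass --link-mode=copy to the uv invocation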
[e2e-llm-inference-service] Installed 100 packages in 278ms
[e2e-llm-inference-service] + aiohappyeyeballs==2.6.1
[e2e-llm-inference-service] + aiohttp==3.13.3
[e2e-llm-inference-service] + aiosignal==1.4.0
[e2e-llm-inference-service] + annotated-doc==0.0.4
[e2e-llm-inference-service] + annotated-types==0.7.0
[e2e-llm-inference-service] + anyio==4.9.0
[e2e-llm-inference-service] + attrs==25.3.0
[e2e-llm-inference-service] + avro==1.12.0
[e2e-llm-inference-service] + black==24.3.0
[e2e-llm-inference-service] + boto3==1.37.35
[e2e-llm-inference-service] + botocore==1.37.35
[e2e-llm-inference-service] + cachetools==5.5.2
[e2e-llm-inference-service] + certifi==2025.1.31
[e2e-llm-inference-service] + cffi==2.0.0
[e2e-llm-inference-service] + charset-normalizer==3.4.1
[e2e-llm-inference-service] + click==8.1.8
[e2e-llm-inference-service] + cloudevents==1.11.0
[e2e-llm-inference-service] + colorama==0.4.6
[e2e-llm-inference-service] + colorlog==6.10.1
[e2e-llm-inference-service] + coverage==7.8.0
[e2e-llm-inference-service] + cryptography==46.0.5
[e2e-llm-inference-service] + deprecation==2.1.0
[e2e-llm-inference-service] + durationpy==0.9
[e2e-llm-inference-service] + execnet==2.1.1
[e2e-llm-inference-service] + fastapi==0.121.3
[e2e-llm-inference-service] + frozenlist==1.5.0
[e2e-llm-inference-service] + google-auth==2.39.0
[e2e-llm-inference-service] + grpc-interceptor==0.15.4
[e2e-llm-inference-service] + grpcio==1.78.1
[e2e-llm-inference-service] + grpcio-testing==1.78.1
[e2e-llm-inference-service] + grpcio-tools==1.78.1
[e2e-llm-inference-service] + h11==0.16.0
[e2e-llm-inference-service] + httpcore==1.0.9
[e2e-llm-inference-service] + httptools==0.6.4
[e2e-llm-inference-service] + httpx==0.27.2
[e2e-llm-inference-service] + httpx-retries==0.4.5
[e2e-llm-inference-service] + idna==3.10
[e2e-llm-inference-service] + iniconfig==2.1.0
[e2e-llm-inference-service] + jinja2==3.1.6
[e2e-llm-inference-service] + jmespath==1.0.1
[e2e-llm-inference-service] + kserve==0.18.0rc1 (from file:///workspace/source/python/kserve)
[e2e-llm-inference-service] + kubernetes==32.0.1
[e2e-llm-inference-service] + markupsafe==3.0.2
[e2e-llm-inference-service] + multidict==6.4.3
[e2e-llm-inference-service] + mypy==0.991
[e2e-llm-inference-service] + mypy-extensions==1.0.0
[e2e-llm-inference-service] + numpy==2.2.4
[e2e-llm-inference-service] + oauthlib==3.2.2
[e2e-llm-inference-service] + orjson==3.10.16
[e2e-llm-inference-service] + packaging==24.2
[e2e-llm-inference-service] + pandas==2.2.3
[e2e-llm-inference-service] + pathspec==0.12.1
[e2e-llm-inference-service] + platformdirs==4.3.7
[e2e-llm-inference-service] + pluggy==1.5.0
[e2e-llm-inference-service] + portforward==0.7.1
[e2e-llm-inference-service] + prometheus-client==0.21.1
[e2e-llm-inference-service] + propcache==0.3.1
[e2e-llm-inference-service] + protobuf==6.33.5
[e2e-llm-inference-service] + psutil==5.9.8
[e2e-llm-inference-service] + pyarrow==19.0.1
[e2e-llm-inference-service] + pyasn==1.6.2
[e2e-llm-inference-service] + pyasn1==0.6.1
[e2e-llm-inference-service] + pyasn1-modules==0.4.2
[e2e-llm-inference-service] + pycparser==2.22
[e2e-llm-inference-service] + pydantic==2.12.4
[e2e-llm-inference-service] + pydantic-core==2.41.5
[e2e-llm-inference-service] + pyjwt==2.12.1
[e2e-llm-inference-service] + pytest==7.4.4
[e2e-llm-inference-service] + pytest-asyncio==0.23.8
[e2e-llm-inference-service] + pytest-cov==5.0.0
[e2e-llm-inference-service] + pytest-httpx==0.30.0
[e2e-llm-inference-service] + pytest-xdist==3.6.1
[e2e-llm-inference-service] + python-dateutil==2.9.0.post0
[e2e-llm-inference-service] + python-dotenv==1.1.0
[e2e-llm-inference-service] + python-multipart==0.0.22
[e2e-llm-inference-service] + python-simple-logger==2.0.19
[e2e-llm-inference-service] + pytz==2025.2
[e2e-llm-inference-service] + pyyaml==6.0.2
[e2e-llm-inference-service] + requests==2.32.3
[e2e-llm-inference-service] + requests-oauthlib==2.0.0
[e2e-llm-inference-service] + rsa==4.9.1
[e2e-llm-inference-service] + s3transfer==0.11.4
[e2e-llm-inference-service] + setuptools==78.1.0
[e2e-llm-inference-service] + six==1.17.0
[e2e-llm-inference-service] + sniffio==1.3.1
[e2e-llm-inference-service] + starlette==0.49.1
[e2e-llm-inference-service] + tabulate==0.9.0
[e2e-llm-inference-service] + timeout-sampler==1.0.3
[e2e-llm-inference-service] + timing-asgi==0.3.1
[e2e-llm-inference-service] + tomlkit==0.13.2
[e2e-llm-inference-service] + typing-extensions==4.15.0
[e2e-llm-inference-service] + typing-inspection==0.4.2
[e2e-llm-inference-service] + tzdata==2025.2
[e2e-llm-inference-service] + urllib3==2.6.2
[e2e-llm-inference-service] + uvicorn==0.34.1
[e2e-llm-inference-service] + uvloop==0.21.0
[e2e-llm-inference-service] + watchfiles==1.0.5
[e2e-llm-inference-service] + websocket-client==1.8.0
[e2e-llm-inference-service] + websockets==15.0.1
[e2e-llm-inference-service] + yarl==1.20.0
[e2e-llm-inference-service] Audited 1 package in 47ms
[e2e-llm-inference-service] /workspace/source
[e2e-llm-inference-service] Creating namespace openshift-keda...
[e2e-llm-inference-service] namespace/openshift-keda created
[e2e-llm-inference-service] Namespace openshift-keda created/ensured.
[e2e-llm-inference-service] ---
[e2e-llm-inference-service] Creating OperatorGroup openshift-keda...
[e2e-llm-inference-service] operatorgroup.operators.coreos.com/openshift-keda created
[e2e-llm-inference-service] OperatorGroup openshift-keda created/ensured.
[e2e-llm-inference-service] ---
[e2e-llm-inference-service] Creating Subscription for openshift-custom-metrics-autoscaler-operator...
[e2e-llm-inference-service] subscription.operators.coreos.com/openshift-custom-metrics-autoscaler-operator created
[e2e-llm-inference-service] Subscription openshift-custom-metrics-autoscaler-operator created/ensured.
[e2e-llm-inference-service] ---
[e2e-llm-inference-service] Waiting for openshift-custom-metrics-autoscaler-operator CSV to become ready...
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (0/600)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (5/600)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (10/600)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (15/600)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (20/600)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription openshift-custom-metrics-autoscaler-operator... (25/600)
[e2e-llm-inference-service] CSV custom-metrics-autoscaler.v2.18.1-2 found, but not yet Succeeded (Phase: Installing). Waiting... (30/600)
[e2e-llm-inference-service] CSV custom-metrics-autoscaler.v2.18.1-2 found, but not yet Succeeded (Phase: Installing). Waiting... (35/600)
[e2e-llm-inference-service] CSV custom-metrics-autoscaler.v2.18.1-2 found, but not yet Succeeded (Phase: Installing). Waiting... (40/600)
[e2e-llm-inference-service] CSV custom-metrics-autoscaler.v2.18.1-2 found, but not yet Succeeded (Phase: Installing). Waiting... (45/600)
[e2e-llm-inference-service] CSV custom-metrics-autoscaler.v2.18.1-2 found, but not yet Succeeded (Phase: Installing). Waiting... (50/600)
[e2e-llm-inference-service] CSV custom-metrics-autoscaler.v2.18.1-2 is ready (Phase: Succeeded).
[e2e-llm-inference-service] ---
[e2e-llm-inference-service] Applying KedaController custom resource...
[e2e-llm-inference-service] Warning: resource kedacontrollers/keda is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
[e2e-llm-inference-service] kedacontroller.keda.sh/keda configured
[e2e-llm-inference-service] KedaController custom resource applied.
[e2e-llm-inference-service] ---
[e2e-llm-inference-service] Allowing time for KEDA components to be provisioned by the operator ...
[e2e-llm-inference-service] Waiting for KEDA Operator pod (selector: "app=keda-operator") to be ready in namespace openshift-keda...
[e2e-llm-inference-service] Waiting for pod -l "app=keda-operator" in namespace "openshift-keda" to be created...
[e2e-llm-inference-service] Pod -l "app=keda-operator" in namespace "openshift-keda" found.
[e2e-llm-inference-service] Current pods for -l "app=keda-operator" in namespace "openshift-keda":
[e2e-llm-inference-service] NAME READY STATUS RESTARTS AGE
[e2e-llm-inference-service] keda-operator-ffbb595cb-5fk89 1/1 Running 0 42s
[e2e-llm-inference-service] Waiting up to 120s for pod(s) -l "app=keda-operator" in namespace "openshift-keda" to become ready...
[e2e-llm-inference-service] pod/keda-operator-ffbb595cb-5fk89 condition met
[e2e-llm-inference-service] Pod(s) -l "app=keda-operator" in namespace "openshift-keda" are ready.
[e2e-llm-inference-service] KEDA Operator pod is ready.
[e2e-llm-inference-service] Waiting for KEDA Metrics API Server pod (selector: "app=keda-metrics-apiserver") to be ready in namespace openshift-keda...
[e2e-llm-inference-service] Waiting for pod -l "app=keda-metrics-apiserver" in namespace "openshift-keda" to be created...
[e2e-llm-inference-service] Pod -l "app=keda-metrics-apiserver" in namespace "openshift-keda" found.
[e2e-llm-inference-service] Current pods for -l "app=keda-metrics-apiserver" in namespace "openshift-keda":
[e2e-llm-inference-service] NAME READY STATUS RESTARTS AGE
[e2e-llm-inference-service] keda-metrics-apiserver-7c9f485588-fq9d9 1/1 Running 0 48s
[e2e-llm-inference-service] Waiting up to 120s for pod(s) -l "app=keda-metrics-apiserver" in namespace "openshift-keda" to become ready...
[e2e-llm-inference-service] pod/keda-metrics-apiserver-7c9f485588-fq9d9 condition met
[e2e-llm-inference-service] Pod(s) -l "app=keda-metrics-apiserver" in namespace "openshift-keda" are ready.
[e2e-llm-inference-service] KEDA Metrics API Server pod is ready.
[e2e-llm-inference-service] Waiting for KEDA Webhook pod (selector: "app=keda-admission-webhooks") to be ready in namespace openshift-keda...
[e2e-llm-inference-service] Waiting for pod -l "app=keda-admission-webhooks" in namespace "openshift-keda" to be created...
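The `(N/600)` lines are a poll loop against OLM: read the Subscription's installed CSV, then check the CSV's phase until it reports Succeeded. A sketch of that pattern (the loop shape is illustrative, not the script's actual code):

    for i in $(seq 0 5 600); do
      csv=$(oc get subscription openshift-custom-metrics-autoscaler-operator \
            -n openshift-keda -o jsonpath='{.status.installedCSV}' 2>/dev/null)
      phase=""
      [ -n "$csv" ] && phase=$(oc get csv "$csv" -n openshift-keda \
            -o jsonpath='{.status.phase}' 2>/dev/null)
      [ "$phase" = "Succeeded" ] && break   # CSV is ready
      echo "Waiting for CSV... ($i/600)"
      sleep 5
    done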
[e2e-llm-inference-service] Pod -l "app=keda-admission-webhooks" in namespace "openshift-keda" found.
[e2e-llm-inference-service] Current pods for -l "app=keda-admission-webhooks" in namespace "openshift-keda":
[e2e-llm-inference-service] NAME READY STATUS RESTARTS AGE
[e2e-llm-inference-service] keda-admission-cf49989db-hcqzw 1/1 Running 0 52s
[e2e-llm-inference-service] Waiting up to 120s for pod(s) -l "app=keda-admission-webhooks" in namespace "openshift-keda" to become ready...
[e2e-llm-inference-service] pod/keda-admission-cf49989db-hcqzw condition met
[e2e-llm-inference-service] Pod(s) -l "app=keda-admission-webhooks" in namespace "openshift-keda" are ready.
[e2e-llm-inference-service] KEDA Webhook pod is ready.
[e2e-llm-inference-service] ---
[e2e-llm-inference-service] ✅ KEDA deployment script finished successfully.
[e2e-llm-inference-service] 🔧 Configuration:
[e2e-llm-inference-service] KServe deployment: ❌ disabled
[e2e-llm-inference-service] Kuadrant deployment: ✅ enabled
[e2e-llm-inference-service]
[e2e-llm-inference-service] Checking OpenShift server version...(4.20.19)
[e2e-llm-inference-service] 🎯 Server version (4.20.19) is 4.19.9 or higher - continue with the script
[e2e-llm-inference-service] ⏳ Installing cert-manager
[e2e-llm-inference-service] namespace/cert-manager-operator created
[e2e-llm-inference-service] operatorgroup.operators.coreos.com/cert-manager-operator created
[e2e-llm-inference-service] subscription.operators.coreos.com/openshift-cert-manager-operator created
[e2e-llm-inference-service] Waiting for openshift-cert-manager-operator CSV to become ready...
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription openshift-cert-manager-operator... (0/300)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription openshift-cert-manager-operator... (5/300)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription openshift-cert-manager-operator... (10/300)
[e2e-llm-inference-service] CSV cert-manager-operator.v1.19.0 found, but not yet Succeeded (Phase: Installing). Waiting... (15/300)
[e2e-llm-inference-service] CSV cert-manager-operator.v1.19.0 is ready (Phase: Succeeded).
[e2e-llm-inference-service] Waiting for CRD certificates.cert-manager.io to appear (timeout: 90s)…
[e2e-llm-inference-service] CRD certificates.cert-manager.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io condition met
[e2e-llm-inference-service] ✅ Cert-manager installed
[e2e-llm-inference-service] ⏳ Installing openshift-lws-operator
[e2e-llm-inference-service] namespace/openshift-lws-operator created
[e2e-llm-inference-service] operatorgroup.operators.coreos.com/leader-worker-set created
[e2e-llm-inference-service] subscription.operators.coreos.com/leader-worker-set created
[e2e-llm-inference-service] Waiting for leader-worker-set CSV to become ready...
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription leader-worker-set... (0/300)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription leader-worker-set... (5/300)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription leader-worker-set... (10/300)
[e2e-llm-inference-service] CSV leader-worker-set.v1.0.0 found, but not yet Succeeded (Phase: Installing). Waiting... (15/300)
[e2e-llm-inference-service] CSV leader-worker-set.v1.0.0 is ready (Phase: Succeeded).
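The `condition met` lines for CRDs match the output format of `oc wait`, so the "become Established" check is presumably equivalent to:

    oc wait --for=condition=Established --timeout=90s \
        crd/certificates.cert-manager.io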
[e2e-llm-inference-service] Waiting for CRD leaderworkersetoperators.operator.openshift.io to appear (timeout: 90s)…
[e2e-llm-inference-service] CRD leaderworkersetoperators.operator.openshift.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/leaderworkersetoperators.operator.openshift.io condition met
[e2e-llm-inference-service] leaderworkersetoperator.operator.openshift.io/cluster created
[e2e-llm-inference-service] ⏳ waiting for openshift-lws-operator to be ready.…
[e2e-llm-inference-service] Waiting for pod -l "name=openshift-lws-operator" in namespace "openshift-lws-operator" to be created...
[e2e-llm-inference-service] Pod -l "name=openshift-lws-operator" in namespace "openshift-lws-operator" found.
[e2e-llm-inference-service] Current pods for -l "name=openshift-lws-operator" in namespace "openshift-lws-operator":
[e2e-llm-inference-service] NAME READY STATUS RESTARTS AGE
[e2e-llm-inference-service] openshift-lws-operator-bfc7f696d-7j5vf 1/1 Running 0 13s
[e2e-llm-inference-service] Waiting up to 600s for pod(s) -l "name=openshift-lws-operator" in namespace "openshift-lws-operator" to become ready...
[e2e-llm-inference-service] pod/openshift-lws-operator-bfc7f696d-7j5vf condition met
[e2e-llm-inference-service] Pod(s) -l "name=openshift-lws-operator" in namespace "openshift-lws-operator" are ready.
[e2e-llm-inference-service] ✅ openshift-lws-operator installed
[e2e-llm-inference-service] gatewayclass.gateway.networking.k8s.io/openshift-default created
[e2e-llm-inference-service] Waiting for pod -l "app=istiod" in namespace "openshift-ingress" to be created...
[e2e-llm-inference-service] Pod -l "app=istiod" in namespace "openshift-ingress" found.
[e2e-llm-inference-service] Current pods for -l "app=istiod" in namespace "openshift-ingress":
[e2e-llm-inference-service] NAME READY STATUS RESTARTS AGE
[e2e-llm-inference-service] istiod-openshift-gateway-7cd77c7ffd-hfqv4 1/1 Running 0 5s
[e2e-llm-inference-service] Waiting up to 600s for pod(s) -l "app=istiod" in namespace "openshift-ingress" to become ready...
[e2e-llm-inference-service] pod/istiod-openshift-gateway-7cd77c7ffd-hfqv4 condition met
[e2e-llm-inference-service] Pod(s) -l "app=istiod" in namespace "openshift-ingress" are ready.
[e2e-llm-inference-service] ⏳ Creating a Gateway
[e2e-llm-inference-service] Error from server (AlreadyExists): namespaces "openshift-ingress" already exists
[e2e-llm-inference-service] gateway.gateway.networking.k8s.io/openshift-ai-inference created
[e2e-llm-inference-service] Waiting for pod -l "serving.kserve.io/gateway=kserve-ingress-gateway" in namespace "openshift-ingress" to be created...
[e2e-llm-inference-service] Pod -l "serving.kserve.io/gateway=kserve-ingress-gateway" in namespace "openshift-ingress" found.
[e2e-llm-inference-service] Current pods for -l "serving.kserve.io/gateway=kserve-ingress-gateway" in namespace "openshift-ingress":
[e2e-llm-inference-service] NAME READY STATUS RESTARTS AGE
[e2e-llm-inference-service] openshift-ai-inference-openshift-default-6b94bb86d8-pkcwx 1/1 Running 0 5s
[e2e-llm-inference-service] Waiting up to 600s for pod(s) -l "serving.kserve.io/gateway=kserve-ingress-gateway" in namespace "openshift-ingress" to become ready...
[e2e-llm-inference-service] pod/openshift-ai-inference-openshift-default-6b94bb86d8-pkcwx condition met
[e2e-llm-inference-service] Pod(s) -l "serving.kserve.io/gateway=kserve-ingress-gateway" in namespace "openshift-ingress" are ready.
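The `AlreadyExists` error is benign (the namespace pre-exists on OpenShift) but suggests a plain `oc create namespace` in the script. An idempotent variant, plus the minimal shape of the Gateway the log names; the listener details are assumptions, since the actual manifest is not shown:

    oc create namespace openshift-ingress --dry-run=client -o yaml | oc apply -f -
    oc apply -f - <<'EOF'
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: openshift-ai-inference
      namespace: openshift-ingress
    spec:
      gatewayClassName: openshift-default
      listeners:
      - name: http
        port: 80
        protocol: HTTP
    EOF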
[e2e-llm-inference-service] ⏳ Installing RHCL(Kuadrant) operator
[e2e-llm-inference-service] namespace/kuadrant-system created
[e2e-llm-inference-service] subscription.operators.coreos.com/rhcl-operator created
[e2e-llm-inference-service] operatorgroup.operators.coreos.com/kuadrant created
[e2e-llm-inference-service] Waiting for rhcl-operator CSV to become ready...
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription rhcl-operator... (0/600)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription rhcl-operator... (5/600)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription rhcl-operator... (10/600)
[e2e-llm-inference-service] Waiting for CSV to be installed for subscription rhcl-operator... (15/600)
[e2e-llm-inference-service] CSV rhcl-operator.v1.3.2 found, but not yet Succeeded (Phase: Installing). Waiting... (20/600)
[e2e-llm-inference-service] CSV rhcl-operator.v1.3.2 found, but not yet Succeeded (Phase: Installing). Waiting... (25/600)
[e2e-llm-inference-service] CSV rhcl-operator.v1.3.2 found, but not yet Succeeded (Phase: Installing). Waiting... (30/600)
[e2e-llm-inference-service] CSV rhcl-operator.v1.3.2 is ready (Phase: Succeeded).
[e2e-llm-inference-service] Waiting for CRD kuadrants.kuadrant.io to appear (timeout: 90s)…
[e2e-llm-inference-service] CRD kuadrants.kuadrant.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/kuadrants.kuadrant.io condition met
[e2e-llm-inference-service] Waiting for apiserver discovery /apis/kuadrant.io/v1beta1 to list kuadrants (timeout: 120s)…
[e2e-llm-inference-service] Discovery for kuadrant.io/v1beta1 includes kuadrants.
[e2e-llm-inference-service] ⏳ sleeping 30s after discovery (RESTMapper can trail discovery)…
[e2e-llm-inference-service] kuadrant.kuadrant.io/kuadrant created
[e2e-llm-inference-service] ⏳ waiting for Kuadrant Ready (attempt 1/2, timeout 5m)…
[e2e-llm-inference-service] kuadrant.kuadrant.io/kuadrant condition met
[e2e-llm-inference-service] Waiting for pod -l "control-plane=authorino-operator" in namespace "kuadrant-system" to be created...
[e2e-llm-inference-service] Pod -l "control-plane=authorino-operator" in namespace "kuadrant-system" found.
[e2e-llm-inference-service] Current pods for -l "control-plane=authorino-operator" in namespace "kuadrant-system":
[e2e-llm-inference-service] NAME READY STATUS RESTARTS AGE
[e2e-llm-inference-service] authorino-operator-7587b89b76-wm84n 1/1 Running 0 66s
[e2e-llm-inference-service] Waiting up to 600s for pod(s) -l "control-plane=authorino-operator" in namespace "kuadrant-system" to become ready...
[e2e-llm-inference-service] pod/authorino-operator-7587b89b76-wm84n condition met
[e2e-llm-inference-service] Pod(s) -l "control-plane=authorino-operator" in namespace "kuadrant-system" are ready.
[e2e-llm-inference-service] ⏳ waiting for authorino service to be created...
[e2e-llm-inference-service] service/authorino-authorino-authorization condition met
[e2e-llm-inference-service] service/authorino-authorino-authorization annotated
[e2e-llm-inference-service] Warning: resource authorinos/authorino is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
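The Kuadrant CR creation and Ready wait above reduce to something like the following; the CR body is the minimal form (the script may set more spec fields), and the apiVersion matches the v1beta1 discovery check in the log:

    oc apply -f - <<'EOF'
    apiVersion: kuadrant.io/v1beta1
    kind: Kuadrant
    metadata:
      name: kuadrant
      namespace: kuadrant-system
    EOF
    oc wait --for=condition=Ready --timeout=5m kuadrant/kuadrant -n kuadrant-system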
[e2e-llm-inference-service] authorino.operator.authorino.kuadrant.io/authorino configured
[e2e-llm-inference-service] Waiting for pod -l "control-plane=authorino-operator" in namespace "kuadrant-system" to be created...
[e2e-llm-inference-service] Pod -l "control-plane=authorino-operator" in namespace "kuadrant-system" found.
[e2e-llm-inference-service] Current pods for -l "control-plane=authorino-operator" in namespace "kuadrant-system":
[e2e-llm-inference-service] NAME READY STATUS RESTARTS AGE
[e2e-llm-inference-service] authorino-operator-7587b89b76-wm84n 1/1 Running 0 75s
[e2e-llm-inference-service] Waiting up to 600s for pod(s) -l "control-plane=authorino-operator" in namespace "kuadrant-system" to become ready...
[e2e-llm-inference-service] pod/authorino-operator-7587b89b76-wm84n condition met
[e2e-llm-inference-service] Pod(s) -l "control-plane=authorino-operator" in namespace "kuadrant-system" are ready.
[e2e-llm-inference-service] ✅ kuadrant(authorino) installed
[e2e-llm-inference-service] Now using project "kserve" on server "https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443".
[e2e-llm-inference-service]
[e2e-llm-inference-service] You can add applications to this project with the 'new-app' command. For example, try:
[e2e-llm-inference-service]
[e2e-llm-inference-service] oc new-app rails-postgresql-example
[e2e-llm-inference-service]
[e2e-llm-inference-service] to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:
[e2e-llm-inference-service]
[e2e-llm-inference-service] kubectl create deployment hello-node --image=registry.k8s.io/e2e-test-images/agnhost:2.43 -- /agnhost serve-hostname
[e2e-llm-inference-service]
[e2e-llm-inference-service] ⏳ Installing KServe with SeaweedFS
[e2e-llm-inference-service] # Warning: 'commonLabels' is deprecated. Please use 'labels' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/clusterstoragecontainers.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/datascienceclusters.datasciencecluster.opendatahub.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/dscinitializations.dscinitialization.opendatahub.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferencegraphs.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferencemodelrewrites.inference.networking.x-k8s.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferenceobjectives.inference.networking.x-k8s.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferencepoolimports.inference.networking.x-k8s.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferencepools.inference.networking.k8s.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferencepools.inference.networking.x-k8s.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferenceservices.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/llminferenceserviceconfigs.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/llminferenceservices.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/servingruntimes.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/trainedmodels.serving.kserve.io serverside-applied
[e2e-llm-inference-service] ⏳ Waiting for CRDs to be established
[e2e-llm-inference-service] Waiting for CRD inferenceservices.serving.kserve.io to appear (timeout: 90s)…
[e2e-llm-inference-service] CRD inferenceservices.serving.kserve.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferenceservices.serving.kserve.io condition met
[e2e-llm-inference-service] Waiting for CRD llminferenceserviceconfigs.serving.kserve.io to appear (timeout: 90s)…
[e2e-llm-inference-service] CRD llminferenceserviceconfigs.serving.kserve.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/llminferenceserviceconfigs.serving.kserve.io condition met
[e2e-llm-inference-service] Waiting for CRD clusterstoragecontainers.serving.kserve.io to appear (timeout: 90s)…
[e2e-llm-inference-service] CRD clusterstoragecontainers.serving.kserve.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/clusterstoragecontainers.serving.kserve.io condition met
[e2e-llm-inference-service] Waiting for CRD datascienceclusters.datasciencecluster.opendatahub.io to appear (timeout: 90s)…
[e2e-llm-inference-service] CRD datascienceclusters.datasciencecluster.opendatahub.io detected — waiting for it to become Established (timeout: 90s)…
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/datascienceclusters.datasciencecluster.opendatahub.io condition met
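This is the usual two-phase server-side install: apply the CRDs, wait for them to be Established, then apply the full manifest set (the next step in the log). A sketch under those assumptions, with illustrative kustomize paths:

    bin/kustomize build config/crd | oc apply --server-side -f -
    oc wait --for=condition=Established --timeout=90s \
        crd/inferenceservices.serving.kserve.io \
        crd/llminferenceserviceconfigs.serving.kserve.io
    bin/kustomize build config/overlays/test | oc apply --server-side -f -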
[e2e-llm-inference-service] ⏳ Applying all resources...
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/clusterstoragecontainers.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/datascienceclusters.datasciencecluster.opendatahub.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/dscinitializations.dscinitialization.opendatahub.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferencegraphs.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferencemodelrewrites.inference.networking.x-k8s.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferenceobjectives.inference.networking.x-k8s.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferencepoolimports.inference.networking.x-k8s.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferencepools.inference.networking.k8s.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferencepools.inference.networking.x-k8s.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/inferenceservices.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/llminferenceserviceconfigs.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/llminferenceservices.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/servingruntimes.serving.kserve.io serverside-applied
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/trainedmodels.serving.kserve.io serverside-applied
[e2e-llm-inference-service] serviceaccount/kserve-controller-manager serverside-applied
[e2e-llm-inference-service] serviceaccount/llmisvc-controller-manager serverside-applied
[e2e-llm-inference-service] role.rbac.authorization.k8s.io/kserve-leader-election-role serverside-applied
[e2e-llm-inference-service] role.rbac.authorization.k8s.io/llmisvc-leader-election-role serverside-applied
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/kserve-admin serverside-applied
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/kserve-edit serverside-applied
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/kserve-llmisvc-distro-role serverside-applied
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/kserve-llmisvc-manager-role serverside-applied
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/kserve-manager-role serverside-applied
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/kserve-metrics-reader-cluster-role serverside-applied
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/kserve-proxy-role serverside-applied
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/kserve-view serverside-applied
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/openshift-ai-llminferenceservice-scc serverside-applied
[e2e-llm-inference-service] rolebinding.rbac.authorization.k8s.io/kserve-leader-election-rolebinding serverside-applied
[e2e-llm-inference-service] rolebinding.rbac.authorization.k8s.io/llmisvc-leader-election-rolebinding serverside-applied
[e2e-llm-inference-service] clusterrolebinding.rbac.authorization.k8s.io/kserve-llmisvc-distro-rolebinding serverside-applied
[e2e-llm-inference-service] clusterrolebinding.rbac.authorization.k8s.io/kserve-manager-rolebinding serverside-applied
[e2e-llm-inference-service] clusterrolebinding.rbac.authorization.k8s.io/kserve-proxy-rolebinding serverside-applied
[e2e-llm-inference-service] clusterrolebinding.rbac.authorization.k8s.io/llmisvc-manager-rolebinding serverside-applied
[e2e-llm-inference-service] configmap/inferenceservice-config serverside-applied
[e2e-llm-inference-service] configmap/kserve-parameters serverside-applied
[e2e-llm-inference-service] secret/kserve-webhook-server-secret serverside-applied
[e2e-llm-inference-service] secret/mlpipeline-s3-artifact serverside-applied
[e2e-llm-inference-service] service/kserve-controller-manager-metrics-service serverside-applied
[e2e-llm-inference-service] service/kserve-controller-manager-service serverside-applied
[e2e-llm-inference-service] service/kserve-webhook-server-service serverside-applied
[e2e-llm-inference-service] service/llmisvc-controller-manager-service serverside-applied
[e2e-llm-inference-service] service/llmisvc-webhook-server-service serverside-applied
[e2e-llm-inference-service] service/s3-service serverside-applied
[e2e-llm-inference-service] deployment.apps/kserve-controller-manager serverside-applied
[e2e-llm-inference-service] deployment.apps/llmisvc-controller-manager serverside-applied
[e2e-llm-inference-service] deployment.apps/seaweedfs serverside-applied
[e2e-llm-inference-service] networkpolicy.networking.k8s.io/kserve-controller-manager serverside-applied
[e2e-llm-inference-service] securitycontextconstraints.security.openshift.io/openshift-ai-llminferenceservice-scc serverside-applied
[e2e-llm-inference-service] clusterstoragecontainer.serving.kserve.io/default serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-decode-template serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-decode-worker-data-parallel serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-prefill-template serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-prefill-worker-data-parallel serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-router-route serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-scheduler serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-amd-rocm serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-ibm-spyre-ppc64le serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-ibm-spyre-s390x serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-ibm-spyre-x86 serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-intel-gaudi serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template-nvidia-cuda serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-worker-data-parallel serverside-applied
[e2e-llm-inference-service] mutatingwebhookconfiguration.admissionregistration.k8s.io/inferenceservice.serving.kserve.io serverside-applied
[e2e-llm-inference-service] validatingwebhookconfiguration.admissionregistration.k8s.io/inferencegraph.serving.kserve.io serverside-applied
[e2e-llm-inference-service] validatingwebhookconfiguration.admissionregistration.k8s.io/inferenceservice.serving.kserve.io serverside-applied
[e2e-llm-inference-service] validatingwebhookconfiguration.admissionregistration.k8s.io/llminferenceservice.serving.kserve.io serverside-applied
[e2e-llm-inference-service] validatingwebhookconfiguration.admissionregistration.k8s.io/llminferenceserviceconfig.serving.kserve.io serverside-applied
[e2e-llm-inference-service] validatingwebhookconfiguration.admissionregistration.k8s.io/servingruntime.serving.kserve.io serverside-applied
[e2e-llm-inference-service] validatingwebhookconfiguration.admissionregistration.k8s.io/trainedmodel.serving.kserve.io serverside-applied
[e2e-llm-inference-service] ⏳ Waiting for llmisvc-controller-manager to be ready...
[e2e-llm-inference-service] Waiting for pod -l "control-plane=llmisvc-controller-manager" in namespace "kserve" to be created...
[e2e-llm-inference-service] Pod -l "control-plane=llmisvc-controller-manager" in namespace "kserve" found.
[e2e-llm-inference-service] Current pods for -l "control-plane=llmisvc-controller-manager" in namespace "kserve":
[e2e-llm-inference-service] NAME READY STATUS RESTARTS AGE
[e2e-llm-inference-service] llmisvc-controller-manager-6c7fdc754b-b449d 0/1 ContainerCreating 0 5s
[e2e-llm-inference-service] Waiting up to 600s for pod(s) -l "control-plane=llmisvc-controller-manager" in namespace "kserve" to become ready...
[e2e-llm-inference-service] pod/llmisvc-controller-manager-6c7fdc754b-b449d condition met
[e2e-llm-inference-service] Pod(s) -l "control-plane=llmisvc-controller-manager" in namespace "kserve" are ready.
[e2e-llm-inference-service] ⏳ Re-applying LLMInferenceServiceConfig resources with webhook validation...
[e2e-llm-inference-service] Warning: modifying well-known config kserve/kserve-config-llm-decode-template is not recommended. Consider creating a custom config instead
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-decode-template serverside-applied
[e2e-llm-inference-service] Warning: modifying well-known config kserve/kserve-config-llm-decode-worker-data-parallel is not recommended. Consider creating a custom config instead
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-decode-worker-data-parallel serverside-applied
[e2e-llm-inference-service] Warning: modifying well-known config kserve/kserve-config-llm-prefill-template is not recommended. Consider creating a custom config instead
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-prefill-template serverside-applied
[e2e-llm-inference-service] Warning: modifying well-known config kserve/kserve-config-llm-prefill-worker-data-parallel is not recommended. Consider creating a custom config instead
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-prefill-worker-data-parallel serverside-applied
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-router-route serverside-applied
[e2e-llm-inference-service] Warning: modifying well-known config kserve/kserve-config-llm-scheduler is not recommended. Consider creating a custom config instead
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-scheduler serverside-applied
[e2e-llm-inference-service] Warning: modifying well-known config kserve/kserve-config-llm-template is not recommended. Consider creating a custom config instead
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-template serverside-applied
[e2e-llm-inference-service] Warning: modifying well-known config kserve/kserve-config-llm-worker-data-parallel is not recommended. Consider creating a custom config instead
[e2e-llm-inference-service] llminferenceserviceconfig.serving.kserve.io/kserve-config-llm-worker-data-parallel serverside-applied
[e2e-llm-inference-service] Installing DSC/DSCI resources...
[e2e-llm-inference-service] dscinitialization.dscinitialization.opendatahub.io/test-dsci created
[e2e-llm-inference-service] datasciencecluster.datasciencecluster.opendatahub.io/test-dsc created
[e2e-llm-inference-service] Patching ingress domain, markers: llminferenceservice and cluster_cpu
[e2e-llm-inference-service] configmap/inferenceservice-config patched
[e2e-llm-inference-service] pod "kserve-controller-manager-8cdbbc8b5-74nv8" deleted
[e2e-llm-inference-service] datasciencecluster.datasciencecluster.opendatahub.io/test-dsc patched
[e2e-llm-inference-service] waiting kserve-controller get ready...
[e2e-llm-inference-service] pod/kserve-controller-manager-8cdbbc8b5-2w9hz condition met
[e2e-llm-inference-service] Installing ODH Model Controller manually with PR images
[e2e-llm-inference-service] customresourcedefinition.apiextensions.k8s.io/accounts.nim.opendatahub.io created
[e2e-llm-inference-service] serviceaccount/model-serving-api created
[e2e-llm-inference-service] serviceaccount/odh-model-controller created
[e2e-llm-inference-service] role.rbac.authorization.k8s.io/leader-election-role created
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/kserve-prometheus-k8s created
[e2e-llm-inference-service] Warning: resource clusterroles/metrics-reader is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
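The "Patching ingress domain" step patches `inferenceservice-config` and deletes the controller pod so it restarts with the new value (ConfigMap changes are not hot-reloaded). A sketch of that sequence; the `ingress`/`ingressDomain` key layout and the pod label are assumptions based on upstream KServe defaults, not read from this log:

    DOMAIN=$(oc get ingresses.config/cluster -o jsonpath='{.spec.domain}')
    # data.ingress is assumed to be a JSON string with an ingressDomain field
    oc get configmap inferenceservice-config -n kserve -o json \
      | jq --arg d "$DOMAIN" '.data.ingress |= (fromjson | .ingressDomain = $d | tojson)' \
      | oc replace -f -
    oc delete pod -n kserve -l control-plane=kserve-controller-manager
    oc wait --for=condition=Ready --timeout=300s pod \
        -l control-plane=kserve-controller-manager -n kserve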
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/metrics-reader configured
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/model-serving-api created
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/odh-model-controller-role created
[e2e-llm-inference-service] clusterrole.rbac.authorization.k8s.io/proxy-role created
[e2e-llm-inference-service] rolebinding.rbac.authorization.k8s.io/leader-election-rolebinding created
[e2e-llm-inference-service] clusterrolebinding.rbac.authorization.k8s.io/model-serving-api created
[e2e-llm-inference-service] clusterrolebinding.rbac.authorization.k8s.io/odh-model-controller-rolebinding-opendatahub created
[e2e-llm-inference-service] clusterrolebinding.rbac.authorization.k8s.io/proxy-rolebinding created
[e2e-llm-inference-service] configmap/odh-model-controller-parameters created
[e2e-llm-inference-service] service/model-serving-api created
[e2e-llm-inference-service] service/odh-model-controller-metrics-service created
[e2e-llm-inference-service] service/odh-model-controller-webhook-service created
[e2e-llm-inference-service] deployment.apps/model-serving-api created
[e2e-llm-inference-service] deployment.apps/odh-model-controller created
[e2e-llm-inference-service] servicemonitor.monitoring.coreos.com/model-serving-api-metrics created
[e2e-llm-inference-service] servicemonitor.monitoring.coreos.com/odh-model-controller-metrics-monitor created
[e2e-llm-inference-service] template.template.openshift.io/guardrails-detector-huggingface-serving-template created
[e2e-llm-inference-service] template.template.openshift.io/kserve-ovms created
[e2e-llm-inference-service] template.template.openshift.io/mlserver-runtime-template created
[e2e-llm-inference-service] template.template.openshift.io/vllm-cpu-runtime-template created
[e2e-llm-inference-service] template.template.openshift.io/vllm-cpu-x86-runtime-template created
[e2e-llm-inference-service] template.template.openshift.io/vllm-cuda-runtime-template created
[e2e-llm-inference-service] template.template.openshift.io/vllm-gaudi-runtime-template created
[e2e-llm-inference-service] template.template.openshift.io/vllm-multinode-runtime-template created
[e2e-llm-inference-service] template.template.openshift.io/vllm-rocm-runtime-template created
[e2e-llm-inference-service] template.template.openshift.io/vllm-spyre-ppc64le-runtime-template created
[e2e-llm-inference-service] template.template.openshift.io/vllm-spyre-s390x-runtime-template created
[e2e-llm-inference-service] template.template.openshift.io/vllm-spyre-x86-runtime-template created
[e2e-llm-inference-service] mutatingwebhookconfiguration.admissionregistration.k8s.io/mutating.odh-model-controller.opendatahub.io created
[e2e-llm-inference-service] validatingwebhookconfiguration.admissionregistration.k8s.io/validating.odh-model-controller.opendatahub.io created
[e2e-llm-inference-service] Waiting for deployment "odh-model-controller" rollout to finish: 0 of 1 updated replicas are available...
[e2e-llm-inference-service] deployment "odh-model-controller" successfully rolled out
[e2e-llm-inference-service] Add testing models to SeaweedFS S3 storage ...
[e2e-llm-inference-service] Waiting for SeaweedFS deployment to be ready...
[e2e-llm-inference-service] deployment "seaweedfs" successfully rolled out
[e2e-llm-inference-service] S3 init job not completed, re-creating...
[e2e-llm-inference-service] job.batch/s3-init created
[e2e-llm-inference-service] Waiting for S3 init job to complete...
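The `condition met` line that follows corresponds to the standard Job completion wait; the namespace is an assumption (the kserve components namespace from earlier in the log):

    oc wait --for=condition=complete --timeout=300s job/s3-init -n kserve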
[e2e-llm-inference-service] job.batch/s3-init condition met
[e2e-llm-inference-service] networkpolicy.networking.k8s.io/allow-all created
[e2e-llm-inference-service] Prepare CI namespace and install ServingRuntimes
[e2e-llm-inference-service] Setting up CI namespace: kserve-ci-e2e-test
[e2e-llm-inference-service] Tearing down CI namespace: kserve-ci-e2e-test
[e2e-llm-inference-service] Namespace kserve-ci-e2e-test does not exist, skipping deletion
[e2e-llm-inference-service] CI namespace teardown complete
[e2e-llm-inference-service] Creating namespace kserve-ci-e2e-test
[e2e-llm-inference-service] namespace/kserve-ci-e2e-test created
[e2e-llm-inference-service] Applying S3 artifact secret
[e2e-llm-inference-service] secret/mlpipeline-s3-artifact created
[e2e-llm-inference-service] Applying storage-config secret
[e2e-llm-inference-service] secret/storage-config created
[e2e-llm-inference-service] Creating odh-trusted-ca-bundle configmap
[e2e-llm-inference-service] configmap/odh-trusted-ca-bundle created
[e2e-llm-inference-service] Installing ServingRuntimes
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-huggingfaceserver created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-huggingfaceserver-multinode created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-lgbserver created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-mlserver created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-paddleserver created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-pmmlserver created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-predictiveserver created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-sklearnserver created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-tensorflow-serving created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-torchserve created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-tritonserver created
[e2e-llm-inference-service] servingruntime.serving.kserve.io/kserve-xgbserver created
[e2e-llm-inference-service] CI namespace setup complete
[e2e-llm-inference-service] Setup complete
[e2e-llm-inference-service] === E2E cluster / operator summary ===
[e2e-llm-inference-service] Client Version: 4.20.11
[e2e-llm-inference-service] Kustomize Version: v5.6.0
[e2e-llm-inference-service] Server Version: 4.20.19
[e2e-llm-inference-service] Kubernetes Version: v1.33.9
[e2e-llm-inference-service] ClusterVersion desired: 4.20.19
[e2e-llm-inference-service] ClusterVersion history (latest): 4.20.19 (Completed)
[e2e-llm-inference-service] CSVs in kuadrant-system:
[e2e-llm-inference-service] authorino-operator.v1.3.0 Succeeded
[e2e-llm-inference-service] cert-manager-operator.v1.19.0 Succeeded
[e2e-llm-inference-service] dns-operator.v1.3.0 Succeeded
[e2e-llm-inference-service] limitador-operator.v1.3.0 Succeeded
[e2e-llm-inference-service] rhcl-operator.v1.3.2 Succeeded
[e2e-llm-inference-service] servicemeshoperator3.v3.1.0 Succeeded
[e2e-llm-inference-service] CSVs in openshift-keda:
[e2e-llm-inference-service] authorino-operator.v1.3.0 Succeeded
[e2e-llm-inference-service] cert-manager-operator.v1.19.0 Succeeded
[e2e-llm-inference-service] custom-metrics-autoscaler.v2.18.1-2 Succeeded
[e2e-llm-inference-service] dns-operator.v1.3.0 Succeeded
[e2e-llm-inference-service] limitador-operator.v1.3.0 Succeeded
[e2e-llm-inference-service] rhcl-operator.v1.3.2 Succeeded
[e2e-llm-inference-service] servicemeshoperator3.v3.1.0 Succeeded [e2e-llm-inference-service] CSVs in cert-manager-operator: [e2e-llm-inference-service] authorino-operator.v1.3.0 Succeeded [e2e-llm-inference-service] cert-manager-operator.v1.19.0 Succeeded [e2e-llm-inference-service] dns-operator.v1.3.0 Succeeded [e2e-llm-inference-service] limitador-operator.v1.3.0 Succeeded [e2e-llm-inference-service] rhcl-operator.v1.3.2 Succeeded [e2e-llm-inference-service] servicemeshoperator3.v3.1.0 Succeeded [e2e-llm-inference-service] CSVs in openshift-lws-operator: [e2e-llm-inference-service] authorino-operator.v1.3.0 Succeeded [e2e-llm-inference-service] cert-manager-operator.v1.19.0 Succeeded [e2e-llm-inference-service] dns-operator.v1.3.0 Succeeded [e2e-llm-inference-service] leader-worker-set.v1.0.0 Succeeded [e2e-llm-inference-service] limitador-operator.v1.3.0 Succeeded [e2e-llm-inference-service] rhcl-operator.v1.3.2 Succeeded [e2e-llm-inference-service] servicemeshoperator3.v3.1.0 Succeeded [e2e-llm-inference-service] CSVs in openshift-operators (ODH / shared operators, filtered): [e2e-llm-inference-service] authorino-operator.v1.3.0 Succeeded [e2e-llm-inference-service] dns-operator.v1.3.0 Succeeded [e2e-llm-inference-service] limitador-operator.v1.3.0 Succeeded [e2e-llm-inference-service] rhcl-operator.v1.3.2 Succeeded [e2e-llm-inference-service] servicemeshoperator3.v3.1.0 Succeeded [e2e-llm-inference-service] Kuadrant / Authorino (diagnostics): [e2e-llm-inference-service] CRD kuadrants.kuadrant.io versions: v1beta1 served=true storage=true [e2e-llm-inference-service] Subscriptions in kuadrant-system: [e2e-llm-inference-service] authorino-operator-stable-redhat-operators-openshift-marketplace stable redhat-operators authorino-operator.v1.3.0 [e2e-llm-inference-service] dns-operator-stable-redhat-operators-openshift-marketplace stable redhat-operators dns-operator.v1.3.0 [e2e-llm-inference-service] limitador-operator-stable-redhat-operators-openshift-marketplace stable redhat-operators limitador-operator.v1.3.0 [e2e-llm-inference-service] rhcl-operator stable redhat-operators rhcl-operator.v1.3.2 [e2e-llm-inference-service] Kuadrant CR conditions (kuadrant/kuadrant-system): [e2e-llm-inference-service] Ready=True (Ready) [e2e-llm-inference-service] === End E2E cluster / operator summary === [e2e-llm-inference-service] /workspace/source [e2e-llm-inference-service] REQUESTS_CA_BUNDLE=-----BEGIN CERTIFICATE----- [e2e-llm-inference-service] MIIDPDCCAiSgAwIBAgIIPkn0UL0j8D0wDQYJKoZIhvcNAQELBQAwJjESMBAGA1UE [e2e-llm-inference-service] CxMJb3BlbnNoaWZ0MRAwDgYDVQQDEwdyb290LWNhMB4XDTI2MDQyNDE4NTkzNVoX [e2e-llm-inference-service] DTM2MDQyMTE4NTkzNVowJjESMBAGA1UECxMJb3BlbnNoaWZ0MRAwDgYDVQQDEwdy [e2e-llm-inference-service] b290LWNhMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAwRkuJ9wxAoIe [e2e-llm-inference-service] GrpBR5MhxVMHAZtmK23fFNoIRW0ELIXgMpcEjzDkpZyXH1cnDDlScHwKd39QWFVJ [e2e-llm-inference-service] 2YBKEqb8prLRkE0snbdmW5h0stJ8sDusULpglKCMWD8WRsoDn6OWFCTGxZhPYVPA [e2e-llm-inference-service] gu2+5+DNEHGgQb8w+hwDKJ38o0BCDHvQr9dvbO2+7NEB9aJsrkg75k41zx64JmF4 [e2e-llm-inference-service] CjiDKEdlAX/QHfekC+tEPrNceR+iQZeU+453le2n0vznMLBdDjZu7jJUyR6tAjNJ [e2e-llm-inference-service] M8Ha+ZLR/oGFCGwjgsyd4Fl/a6rEOFOVBJp5HEx+2JeyGmE0PtL2K5HuDXKRfhlB [e2e-llm-inference-service] HREPHkd61QIDAQABo24wbDAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB [e2e-llm-inference-service] /zBJBgNVHQ4EQgRAGAkOgHJAeVm/gvBxY95Jzf5eydJhomIzmFFF8a/ObzEOCxgg [e2e-llm-inference-service] 
[e2e-llm-inference-service] REQUESTS_CA_BUNDLE=<three PEM certificates elided: the cluster root-ca, the openshift-ingress serving certificate, and the openshift-service-serving-signer CA>
[e2e-llm-inference-service] Run E2E tests: llminferenceservice and cluster_cpu
[e2e-llm-inference-service] Starting E2E functional tests ...
[e2e-llm-inference-service] Parallelism requested for pytest is 2
[e2e-llm-inference-service] ============================= test session starts ==============================
[e2e-llm-inference-service] platform linux -- Python 3.11.13, pytest-7.4.4, pluggy-1.5.0 -- /workspace/source/python/kserve/.venv/bin/python
[e2e-llm-inference-service] cachedir: .pytest_cache
[e2e-llm-inference-service] rootdir: /workspace/source/test/e2e
[e2e-llm-inference-service] configfile: pytest.ini
[e2e-llm-inference-service] plugins: asyncio-0.23.8, httpx-0.30.0, anyio-4.9.0, xdist-3.6.1, cov-5.0.0
[e2e-llm-inference-service] asyncio: mode=Mode.STRICT
[e2e-llm-inference-service] created: 2/2 workers
[e2e-llm-inference-service] 2 workers [32 items]
[e2e-llm-inference-service]
[e2e-llm-inference-service] scheduling tests via WorkStealingScheduling
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-workload-pd-cpu-model-fb-opt-125m] 2026-04-24 19:19:19.892 5291 kserve INFO [conftest.py:configure_logger():40] Logger configured
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_gateway_section_name.py::test_gateway_section_name_propagation[cluster_single_node-cluster_cpu-with-section-name] 2026-04-24 19:19:19.902 5288 kserve INFO [conftest.py:configure_logger():40] Logger configured
[e2e-llm-inference-service] 2026-04-24 19:19:19.919 5288 kserve.trace Checking Gateway router-gateway-1 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 19:19:19.919 5288 kserve.trace INFO [gw_api.py:create_or_update_gateway():34] Checking Gateway router-gateway-1 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 19:19:19.941 5288 kserve.trace Resource not found, creating Gateway router-gateway-1
[e2e-llm-inference-service] 2026-04-24 19:19:19.941 5288 kserve.trace INFO [gw_api.py:create_or_update_gateway():62] Resource not found, creating Gateway router-gateway-1
[e2e-llm-inference-service] 2026-04-24 19:19:19.947 5288 kserve.trace ✓ Successfully created Gateway router-gateway-1
[e2e-llm-inference-service] 2026-04-24 19:19:19.947 5288 kserve.trace INFO [gw_api.py:create_or_update_gateway():70] ✓ Successfully created Gateway router-gateway-1
[e2e-llm-inference-service]
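The gw_api.py create_or_update_gateway()/create_or_update_route() traces interleaved with the results below follow a plain get-then-create-or-update pattern against the Gateway API custom resources. A minimal sketch of that flow with the kubernetes Python client — a reconstruction for illustration under that assumption, not the actual test helper:

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    GROUP, VERSION, PLURAL = "gateway.networking.k8s.io", "v1", "gateways"

    def create_or_update_gateway(gateway: dict, namespace: str):
        """Create the Gateway if it is absent, otherwise patch it in place."""
        config.load_kube_config()
        api = client.CustomObjectsApi()
        name = gateway["metadata"]["name"]
        try:
            api.get_namespaced_custom_object(GROUP, VERSION, namespace, PLURAL, name)
        except ApiException as e:
            if e.status != 404:
                raise
            # The "Resource not found, creating Gateway ..." branch in the trace
            return api.create_namespaced_custom_object(GROUP, VERSION, namespace, PLURAL, gateway)
        # The "Successfully updated Gateway ..." branch in the trace
        return api.patch_namespaced_custom_object(GROUP, VERSION, namespace, PLURAL, name, gateway)

The HttpRoute messages would come from the same pattern pointed at the httproutes plural of the same API group.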
[e2e-llm-inference-service] [gw0] PASSED llmisvc/test_gateway_section_name.py::test_gateway_section_name_propagation[cluster_single_node-cluster_cpu-with-section-name]
[e2e-llm-inference-service] llmisvc/test_gateway_section_name.py::test_gateway_section_name_propagation[cluster_single_node-cluster_cpu-without-section-name] 2026-04-24 19:20:01.860 5288 kserve.trace Checking Gateway router-gateway-1 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 19:20:01.860 5288 kserve.trace INFO [gw_api.py:create_or_update_gateway():34] Checking Gateway router-gateway-1 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 19:20:01.901 5288 kserve.trace ✓ Successfully updated Gateway router-gateway-1
[e2e-llm-inference-service] 2026-04-24 19:20:01.901 5288 kserve.trace INFO [gw_api.py:create_or_update_gateway():57] ✓ Successfully updated Gateway router-gateway-1
[e2e-llm-inference-service]
[e2e-llm-inference-service] [gw0] PASSED llmisvc/test_gateway_section_name.py::test_gateway_section_name_propagation[cluster_single_node-cluster_cpu-without-section-name]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_hpa_deployment[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-workload-pd-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-custom-route-timeout-pd-scheduler-managed-workload-pd-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-custom-route-timeout-pd-scheduler-managed-workload-pd-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-with-refs-pd-scheduler-managed-workload-pd-cpu-model-fb-opt-125m] 2026-04-24 19:26:17.355 5291 kserve.trace Checking Gateway router-gateway-2 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 19:26:17.355 5291 kserve.trace INFO [gw_api.py:create_or_update_gateway():34] Checking Gateway router-gateway-2 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 19:26:17.401 5291 kserve.trace Resource not found, creating Gateway router-gateway-2
[e2e-llm-inference-service] 2026-04-24 19:26:17.401 5291 kserve.trace INFO [gw_api.py:create_or_update_gateway():62] Resource not found, creating Gateway router-gateway-2
[e2e-llm-inference-service] 2026-04-24 19:26:17.412 5291 kserve.trace ✓ Successfully created Gateway router-gateway-2
[e2e-llm-inference-service] 2026-04-24 19:26:17.412 5291 kserve.trace INFO [gw_api.py:create_or_update_gateway():70] ✓ Successfully created Gateway router-gateway-2
[e2e-llm-inference-service] 2026-04-24 19:26:17.412 5291 kserve.trace Checking HttpRoute router-route-3 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 19:26:17.412 5291 kserve.trace INFO [gw_api.py:create_or_update_route():121] Checking HttpRoute router-route-3 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 19:26:17.425 5291 kserve.trace Resource not found, creating HttpRoute router-route-3
[e2e-llm-inference-service] 2026-04-24 19:26:17.425 5291 kserve.trace INFO [gw_api.py:create_or_update_route():149] Resource not found, creating HttpRoute router-route-3
[e2e-llm-inference-service] 2026-04-24 19:26:17.458 5291 kserve.trace ✓ Successfully created HttpRoute router-route-3
[e2e-llm-inference-service] 2026-04-24 19:26:17.458 5291 kserve.trace INFO [gw_api.py:create_or_update_route():157] ✓ Successfully created HttpRoute router-route-3
[e2e-llm-inference-service] 2026-04-24 19:26:17.458 5291 kserve.trace Checking HttpRoute router-route-4 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 19:26:17.458 5291 kserve.trace INFO [gw_api.py:create_or_update_route():121] Checking HttpRoute router-route-4 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 19:26:17.467 5291 kserve.trace Resource not found, creating HttpRoute router-route-4
[e2e-llm-inference-service] 2026-04-24 19:26:17.467 5291 kserve.trace INFO [gw_api.py:create_or_update_route():149] Resource not found, creating HttpRoute router-route-4
[e2e-llm-inference-service] 2026-04-24 19:26:17.482 5291 kserve.trace ✓ Successfully created HttpRoute router-route-4
[e2e-llm-inference-service] 2026-04-24 19:26:17.482 5291 kserve.trace INFO [gw_api.py:create_or_update_route():157] ✓ Successfully created HttpRoute router-route-4
[e2e-llm-inference-service]
[e2e-llm-inference-service] [gw0] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_hpa_deployment[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_keda_deployment[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] [gw1] FAILED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-with-refs-pd-scheduler-managed-workload-pd-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-no-scheduler-workload-single-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-no-scheduler-workload-single-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_multi_node-router-managed-workload-simulated-dp-ep-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_multi_node-router-managed-workload-simulated-dp-ep-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-inline-config-workload-llmd-simulator]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-inline-config-workload-llmd-simulator]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-configmap-ref-workload-llmd-simulator]
[e2e-llm-inference-service] [gw0] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_keda_deployment[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_hpa_lws[cluster_cpu-cluster_multi_node-router-managed-workload-llmd-simulator-lws-scaling-hpa]
[e2e-llm-inference-service] [gw1] FAILED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-configmap-ref-workload-llmd-simulator]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-replicas-workload-llmd-simulator]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-replicas-workload-llmd-simulator]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-precise-prefix-cache-inline-config-workload-llmd-simulator-kvcache]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-precise-prefix-cache-inline-config-workload-llmd-simulator-kvcache]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service_conversion.py::TestLLMInferenceServiceConversion::test_v1alpha1_to_v1alpha2_conversion
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service_conversion.py::TestLLMInferenceServiceConversion::test_v1alpha1_to_v1alpha2_conversion
[e2e-llm-inference-service] llmisvc/test_llm_inference_service_conversion.py::TestLLMInferenceServiceConversion::test_v1alpha2_to_v1alpha1_conversion
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service_conversion.py::TestLLMInferenceServiceConversion::test_v1alpha2_to_v1alpha1_conversion
[e2e-llm-inference-service] llmisvc/test_llm_inference_service_conversion.py::TestLLMInferenceServiceConversion::test_criticality_preservation_via_annotations
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service_conversion.py::TestLLMInferenceServiceConversion::test_criticality_preservation_via_annotations
[e2e-llm-inference-service] llmisvc/test_llm_inference_service_conversion.py::TestLLMInferenceServiceConversion::test_lora_criticality_preservation
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service_conversion.py::TestLLMInferenceServiceConversion::test_lora_criticality_preservation
[e2e-llm-inference-service] llmisvc/test_llm_inference_service_conversion.py::TestLLMInferenceServiceConversion::test_round_trip_conversion_preserves_fields
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service_conversion.py::TestLLMInferenceServiceConversion::test_round_trip_conversion_preserves_fields
[e2e-llm-inference-service] llmisvc/test_llm_inference_service_stop.py::test_llm_stop_feature[cluster_cpu-cluster_single_node-router-managed-workload-single-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] [gw0] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_hpa_lws[cluster_cpu-cluster_multi_node-router-managed-workload-llmd-simulator-lws-scaling-hpa]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_keda_lws[cluster_cpu-cluster_multi_node-router-managed-workload-llmd-simulator-lws-scaling-keda]
[e2e-llm-inference-service] [gw1] FAILED llmisvc/test_llm_inference_service_stop.py::test_llm_stop_feature[cluster_cpu-cluster_single_node-router-managed-workload-single-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] llmisvc/test_storage_version_migration.py::TestStorageVersionMigration::test_storage_version_migration_after_simulated_upgrade
[e2e-llm-inference-service] [gw0] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_keda_lws[cluster_cpu-cluster_multi_node-router-managed-workload-llmd-simulator-lws-scaling-keda]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_cleanup_hpa[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_storage_version_migration.py::TestStorageVersionMigration::test_storage_version_migration_after_simulated_upgrade
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_update_keda[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] [gw0] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_cleanup_hpa[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_cleanup_keda[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] [gw1] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_update_keda[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-with-gateway-ref-router-with-managed-route-model-fb-opt-125m-workload-llmd-simulator] 2026-04-24 20:36:40.002 5291 kserve.trace Checking Gateway router-gateway-1 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 20:36:40.002 5291 kserve.trace INFO [gw_api.py:create_or_update_gateway():34] Checking Gateway router-gateway-1 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 20:36:40.052 5291 kserve.trace ✓ Successfully updated Gateway router-gateway-1
[e2e-llm-inference-service] 2026-04-24 20:36:40.052 5291 kserve.trace INFO [gw_api.py:create_or_update_gateway():57] ✓ Successfully updated Gateway router-gateway-1
[e2e-llm-inference-service]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-with-gateway-ref-router-with-managed-route-model-fb-opt-125m-workload-llmd-simulator]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-workload-single-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-workload-single-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-custom-route-timeout-scheduler-managed-workload-single-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-custom-route-timeout-scheduler-managed-workload-single-cpu-model-fb-opt-125m]
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-with-refs-scheduler-managed-workload-single-cpu-model-fb-opt-125m] 2026-04-24 20:44:47.042 5291 kserve.trace Checking Gateway router-gateway-1 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 20:44:47.042 5291 kserve.trace INFO [gw_api.py:create_or_update_gateway():34] Checking Gateway router-gateway-1 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 20:44:47.084 5291 kserve.trace ✓ Successfully updated Gateway router-gateway-1
[e2e-llm-inference-service] 2026-04-24 20:44:47.084 5291 kserve.trace INFO [gw_api.py:create_or_update_gateway():57] ✓ Successfully updated Gateway router-gateway-1
[e2e-llm-inference-service] 2026-04-24 20:44:47.084 5291 kserve.trace Checking HttpRoute router-route-1 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 20:44:47.084 5291 kserve.trace INFO [gw_api.py:create_or_update_route():121] Checking HttpRoute router-route-1 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 20:44:47.090 5291 kserve.trace Resource not found, creating HttpRoute router-route-1
[e2e-llm-inference-service] 2026-04-24 20:44:47.090 5291 kserve.trace INFO [gw_api.py:create_or_update_route():149] Resource not found, creating HttpRoute router-route-1
[e2e-llm-inference-service] 2026-04-24 20:44:47.100 5291 kserve.trace ✓ Successfully created HttpRoute router-route-1
[e2e-llm-inference-service] 2026-04-24 20:44:47.100 5291 kserve.trace INFO [gw_api.py:create_or_update_route():157] ✓ Successfully created HttpRoute router-route-1
[e2e-llm-inference-service] 2026-04-24 20:44:47.100 5291 kserve.trace Checking HttpRoute router-route-2 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 20:44:47.100 5291 kserve.trace INFO [gw_api.py:create_or_update_route():121] Checking HttpRoute router-route-2 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] 2026-04-24 20:44:47.105 5291 kserve.trace Resource not found, creating HttpRoute router-route-2
[e2e-llm-inference-service] 2026-04-24 20:44:47.105 5291 kserve.trace INFO [gw_api.py:create_or_update_route():149] Resource not found, creating HttpRoute router-route-2
[e2e-llm-inference-service] 2026-04-24 20:44:47.118 5291 kserve.trace ✓ Successfully created HttpRoute router-route-2
[e2e-llm-inference-service] 2026-04-24 20:44:47.118 5291 kserve.trace INFO [gw_api.py:create_or_update_route():157] ✓ Successfully created HttpRoute router-route-2
[e2e-llm-inference-service]
[e2e-llm-inference-service] [gw1] PASSED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-with-refs-scheduler-managed-workload-single-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_stop_keda[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] [gw0] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_cleanup_keda[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_stop_hpa[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] [gw0] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_stop_hpa[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] [gw1] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_stop_keda[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_update_hpa[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] [gw1] ERROR llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_update_hpa[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service]
[e2e-llm-inference-service] ==================================== ERRORS ====================================
[e2e-llm-inference-service] _ ERROR at setup of test_llm_autoscaling_update_hpa[router-managed-workload-llmd-simulator-no-replicas-scaling-hpa] _
[e2e-llm-inference-service] [gw1] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client =
[e2e-llm-inference-service] llm_config = {'apiVersion': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceServiceConfig', 'metadata': {'name': 'router-managed...pdate-4ae0cfce', 'namespace': 'kserve-ci-e2e-test'}, 'spec': {'router': {'gateway': {}, 'route': {}, 'scheduler': {}}}}
[e2e-llm-inference-service] namespace = 'kserve-ci-e2e-test'
[e2e-llm-inference-service]
[e2e-llm-inference-service] def _create_or_update_llmisvc_config(kserve_client, llm_config, namespace=None):
[e2e-llm-inference-service]     """Create or update an LLMInferenceServiceConfig resource."""
[e2e-llm-inference-service]     version = llm_config["apiVersion"].split("/")[1]
[e2e-llm-inference-service]
[e2e-llm-inference-service]     if namespace is None:
[e2e-llm-inference-service]         namespace = llm_config.get("metadata", {}).get("namespace", "default")
[e2e-llm-inference-service]
[e2e-llm-inference-service]     name = llm_config.get("metadata", {}).get("name")
[e2e-llm-inference-service]     if not name:
[e2e-llm-inference-service]         raise ValueError("LLMInferenceServiceConfig must have a name in metadata")
[e2e-llm-inference-service]
[e2e-llm-inference-service]     logger.info(f"Checking LLMInferenceServiceConfig {name} in namespace {namespace}")
[e2e-llm-inference-service]
[e2e-llm-inference-service]     try:
[e2e-llm-inference-service] >       existing_config = kserve_client.api_instance.get_namespaced_custom_object(
[e2e-llm-inference-service]             constants.KSERVE_GROUP,
[e2e-llm-inference-service]             version,
[e2e-llm-inference-service]             namespace,
[e2e-llm-inference-service]             KSERVE_PLURAL_LLMINFERENCESERVICECONFIG,
[e2e-llm-inference-service]             name,
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/fixtures.py:1280:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] self =
[e2e-llm-inference-service] group = 'serving.kserve.io', version = 'v1alpha1'
[e2e-llm-inference-service] namespace = 'kserve-ci-e2e-test', plural = 'llminferenceserviceconfigs'
[e2e-llm-inference-service] name = 'router-managed-autoscale-update-4ae0cfce'
[e2e-llm-inference-service] kwargs = {'_return_http_data_only': True}
[e2e-llm-inference-service]
[e2e-llm-inference-service] def get_namespaced_custom_object(self, group, version, namespace,
plural, name, **kwargs): # noqa: E501 [e2e-llm-inference-service] """get_namespaced_custom_object # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] Returns a namespace scoped custom object # noqa: E501 [e2e-llm-inference-service] This method makes a synchronous HTTP request by default. To make an [e2e-llm-inference-service] asynchronous HTTP request, please pass async_req=True [e2e-llm-inference-service] >>> thread = api.get_namespaced_custom_object(group, version, namespace, plural, name, async_req=True) [e2e-llm-inference-service] >>> result = thread.get() [e2e-llm-inference-service] [e2e-llm-inference-service] :param async_req bool: execute request asynchronously [e2e-llm-inference-service] :param str group: the custom resource's group (required) [e2e-llm-inference-service] :param str version: the custom resource's version (required) [e2e-llm-inference-service] :param str namespace: The custom resource's namespace (required) [e2e-llm-inference-service] :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-llm-inference-service] :param str name: the custom object's name (required) [e2e-llm-inference-service] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-llm-inference-service] be returned without reading/decoding response [e2e-llm-inference-service] data. Default is True. [e2e-llm-inference-service] :param _request_timeout: timeout setting for this request. If one [e2e-llm-inference-service] number provided, it will be total request [e2e-llm-inference-service] timeout. It can also be a pair (tuple) of [e2e-llm-inference-service] (connection, read) timeouts. [e2e-llm-inference-service] :return: object [e2e-llm-inference-service] If the method is called asynchronously, [e2e-llm-inference-service] returns the request thread. [e2e-llm-inference-service] """ [e2e-llm-inference-service] kwargs['_return_http_data_only'] = True [e2e-llm-inference-service] > return self.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, **kwargs) # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1632: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] group = 'serving.kserve.io', version = 'v1alpha1' [e2e-llm-inference-service] namespace = 'kserve-ci-e2e-test', plural = 'llminferenceserviceconfigs' [e2e-llm-inference-service] name = 'router-managed-autoscale-update-4ae0cfce' [e2e-llm-inference-service] kwargs = {'_return_http_data_only': True} [e2e-llm-inference-service] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...], 'auth_settings': ['BearerToken'], 'body_params': None, ...} [e2e-llm-inference-service] all_params = ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...] 
[e2e-llm-inference-service] key = '_return_http_data_only', val = True, collection_formats = {} [e2e-llm-inference-service] path_params = {'group': 'serving.kserve.io', 'name': 'router-managed-autoscale-update-4ae0cfce', 'namespace': 'kserve-ci-e2e-test', 'plural': 'llminferenceserviceconfigs', ...} [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] [e2e-llm-inference-service] def get_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, name, **kwargs): # noqa: E501 [e2e-llm-inference-service] """get_namespaced_custom_object # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] Returns a namespace scoped custom object # noqa: E501 [e2e-llm-inference-service] This method makes a synchronous HTTP request by default. To make an [e2e-llm-inference-service] asynchronous HTTP request, please pass async_req=True [e2e-llm-inference-service] >>> thread = api.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, async_req=True) [e2e-llm-inference-service] >>> result = thread.get() [e2e-llm-inference-service] [e2e-llm-inference-service] :param async_req bool: execute request asynchronously [e2e-llm-inference-service] :param str group: the custom resource's group (required) [e2e-llm-inference-service] :param str version: the custom resource's version (required) [e2e-llm-inference-service] :param str namespace: The custom resource's namespace (required) [e2e-llm-inference-service] :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-llm-inference-service] :param str name: the custom object's name (required) [e2e-llm-inference-service] :param _return_http_data_only: response data without head status code [e2e-llm-inference-service] and headers [e2e-llm-inference-service] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-llm-inference-service] be returned without reading/decoding response [e2e-llm-inference-service] data. Default is True. [e2e-llm-inference-service] :param _request_timeout: timeout setting for this request. If one [e2e-llm-inference-service] number provided, it will be total request [e2e-llm-inference-service] timeout. It can also be a pair (tuple) of [e2e-llm-inference-service] (connection, read) timeouts. [e2e-llm-inference-service] :return: tuple(object, status_code(int), headers(HTTPHeaderDict)) [e2e-llm-inference-service] If the method is called asynchronously, [e2e-llm-inference-service] returns the request thread. 
[e2e-llm-inference-service] """ [e2e-llm-inference-service] [e2e-llm-inference-service] local_var_params = locals() [e2e-llm-inference-service] [e2e-llm-inference-service] all_params = [ [e2e-llm-inference-service] 'group', [e2e-llm-inference-service] 'version', [e2e-llm-inference-service] 'namespace', [e2e-llm-inference-service] 'plural', [e2e-llm-inference-service] 'name' [e2e-llm-inference-service] ] [e2e-llm-inference-service] all_params.extend( [e2e-llm-inference-service] [ [e2e-llm-inference-service] 'async_req', [e2e-llm-inference-service] '_return_http_data_only', [e2e-llm-inference-service] '_preload_content', [e2e-llm-inference-service] '_request_timeout' [e2e-llm-inference-service] ] [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-llm-inference-service] if key not in all_params: [e2e-llm-inference-service] raise ApiTypeError( [e2e-llm-inference-service] "Got an unexpected keyword argument '%s'" [e2e-llm-inference-service] " to method get_namespaced_custom_object" % key [e2e-llm-inference-service] ) [e2e-llm-inference-service] local_var_params[key] = val [e2e-llm-inference-service] del local_var_params['kwargs'] [e2e-llm-inference-service] # verify the required parameter 'group' is set [e2e-llm-inference-service] if self.api_client.client_side_validation and ('group' not in local_var_params or # noqa: E501 [e2e-llm-inference-service] local_var_params['group'] is None): # noqa: E501 [e2e-llm-inference-service] raise ApiValueError("Missing the required parameter `group` when calling `get_namespaced_custom_object`") # noqa: E501 [e2e-llm-inference-service] # verify the required parameter 'version' is set [e2e-llm-inference-service] if self.api_client.client_side_validation and ('version' not in local_var_params or # noqa: E501 [e2e-llm-inference-service] local_var_params['version'] is None): # noqa: E501 [e2e-llm-inference-service] raise ApiValueError("Missing the required parameter `version` when calling `get_namespaced_custom_object`") # noqa: E501 [e2e-llm-inference-service] # verify the required parameter 'namespace' is set [e2e-llm-inference-service] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-llm-inference-service] local_var_params['namespace'] is None): # noqa: E501 [e2e-llm-inference-service] raise ApiValueError("Missing the required parameter `namespace` when calling `get_namespaced_custom_object`") # noqa: E501 [e2e-llm-inference-service] # verify the required parameter 'plural' is set [e2e-llm-inference-service] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-llm-inference-service] local_var_params['plural'] is None): # noqa: E501 [e2e-llm-inference-service] raise ApiValueError("Missing the required parameter `plural` when calling `get_namespaced_custom_object`") # noqa: E501 [e2e-llm-inference-service] # verify the required parameter 'name' is set [e2e-llm-inference-service] if self.api_client.client_side_validation and ('name' not in local_var_params or # noqa: E501 [e2e-llm-inference-service] local_var_params['name'] is None): # noqa: E501 [e2e-llm-inference-service] raise ApiValueError("Missing the required parameter `name` when calling `get_namespaced_custom_object`") # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] collection_formats = {} [e2e-llm-inference-service] [e2e-llm-inference-service] path_params = {} 
[e2e-llm-inference-service] if 'group' in local_var_params: [e2e-llm-inference-service] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-llm-inference-service] if 'version' in local_var_params: [e2e-llm-inference-service] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-llm-inference-service] if 'namespace' in local_var_params: [e2e-llm-inference-service] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-llm-inference-service] if 'plural' in local_var_params: [e2e-llm-inference-service] path_params['plural'] = local_var_params['plural'] # noqa: E501 [e2e-llm-inference-service] if 'name' in local_var_params: [e2e-llm-inference-service] path_params['name'] = local_var_params['name'] # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] [e2e-llm-inference-service] header_params = {} [e2e-llm-inference-service] [e2e-llm-inference-service] form_params = [] [e2e-llm-inference-service] local_var_files = {} [e2e-llm-inference-service] [e2e-llm-inference-service] body_params = None [e2e-llm-inference-service] # HTTP header `Accept` [e2e-llm-inference-service] header_params['Accept'] = self.api_client.select_header_accept( [e2e-llm-inference-service] ['application/json']) # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] # Authentication setting [e2e-llm-inference-service] auth_settings = ['BearerToken'] # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] > return self.api_client.call_api( [e2e-llm-inference-service] '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}', 'GET', [e2e-llm-inference-service] path_params, [e2e-llm-inference-service] query_params, [e2e-llm-inference-service] header_params, [e2e-llm-inference-service] body=body_params, [e2e-llm-inference-service] post_params=form_params, [e2e-llm-inference-service] files=local_var_files, [e2e-llm-inference-service] response_type='object', # noqa: E501 [e2e-llm-inference-service] auth_settings=auth_settings, [e2e-llm-inference-service] async_req=local_var_params.get('async_req'), [e2e-llm-inference-service] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-llm-inference-service] _preload_content=local_var_params.get('_preload_content', True), [e2e-llm-inference-service] _request_timeout=local_var_params.get('_request_timeout'), [e2e-llm-inference-service] collection_formats=collection_formats) [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1739: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}' [e2e-llm-inference-service] method = 'GET' [e2e-llm-inference-service] path_params = {'group': 'serving.kserve.io', 'name': 'router-managed-autoscale-update-4ae0cfce', 'namespace': 'kserve-ci-e2e-test', 'plural': 'llminferenceserviceconfigs', ...} [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-llm-inference-service] body = None, post_params = [], files = {}, response_type = 'object' [e2e-llm-inference-service] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only 
= True [e2e-llm-inference-service] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-llm-inference-service] _host = None [e2e-llm-inference-service] [e2e-llm-inference-service] def call_api(self, resource_path, method, [e2e-llm-inference-service] path_params=None, query_params=None, header_params=None, [e2e-llm-inference-service] body=None, post_params=None, files=None, [e2e-llm-inference-service] response_type=None, auth_settings=None, async_req=None, [e2e-llm-inference-service] _return_http_data_only=None, collection_formats=None, [e2e-llm-inference-service] _preload_content=True, _request_timeout=None, _host=None): [e2e-llm-inference-service] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-llm-inference-service] [e2e-llm-inference-service] To make an async_req request, set the async_req parameter. [e2e-llm-inference-service] [e2e-llm-inference-service] :param resource_path: Path to method endpoint. [e2e-llm-inference-service] :param method: Method to call. [e2e-llm-inference-service] :param path_params: Path parameters in the url. [e2e-llm-inference-service] :param query_params: Query parameters in the url. [e2e-llm-inference-service] :param header_params: Header parameters to be [e2e-llm-inference-service] placed in the request header. [e2e-llm-inference-service] :param body: Request body. [e2e-llm-inference-service] :param post_params dict: Request post form parameters, [e2e-llm-inference-service] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-llm-inference-service] :param auth_settings list: Auth Settings names for the request. [e2e-llm-inference-service] :param response: Response data type. [e2e-llm-inference-service] :param files dict: key -> filename, value -> filepath, [e2e-llm-inference-service] for `multipart/form-data`. [e2e-llm-inference-service] :param async_req bool: execute request asynchronously [e2e-llm-inference-service] :param _return_http_data_only: response data without head status code [e2e-llm-inference-service] and headers [e2e-llm-inference-service] :param collection_formats: dict of collection formats for path, query, [e2e-llm-inference-service] header, and post parameters. [e2e-llm-inference-service] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-llm-inference-service] be returned without reading/decoding response [e2e-llm-inference-service] data. Default is True. [e2e-llm-inference-service] :param _request_timeout: timeout setting for this request. If one [e2e-llm-inference-service] number provided, it will be total request [e2e-llm-inference-service] timeout. It can also be a pair (tuple) of [e2e-llm-inference-service] (connection, read) timeouts. [e2e-llm-inference-service] :return: [e2e-llm-inference-service] If async_req parameter is True, [e2e-llm-inference-service] the request will be called asynchronously. [e2e-llm-inference-service] The method will return the request thread. [e2e-llm-inference-service] If parameter async_req is False or missing, [e2e-llm-inference-service] then the method will return the response directly. 
[e2e-llm-inference-service] """ [e2e-llm-inference-service] if not async_req: [e2e-llm-inference-service] > return self.__call_api(resource_path, method, [e2e-llm-inference-service] path_params, query_params, header_params, [e2e-llm-inference-service] body, post_params, files, [e2e-llm-inference-service] response_type, auth_settings, [e2e-llm-inference-service] _return_http_data_only, collection_formats, [e2e-llm-inference-service] _preload_content, _request_timeout, _host) [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] resource_path = '/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceserviceconfigs/router-managed-autoscale-update-4ae0cfce' [e2e-llm-inference-service] method = 'GET' [e2e-llm-inference-service] path_params = [('group', 'serving.kserve.io'), ('version', 'v1alpha1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'llminferenceserviceconfigs'), ('name', 'router-managed-autoscale-update-4ae0cfce')] [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-llm-inference-service] body = None, post_params = [], files = {}, response_type = 'object' [e2e-llm-inference-service] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-llm-inference-service] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-llm-inference-service] _host = None [e2e-llm-inference-service] [e2e-llm-inference-service] def __call_api( [e2e-llm-inference-service] self, resource_path, method, path_params=None, [e2e-llm-inference-service] query_params=None, header_params=None, body=None, post_params=None, [e2e-llm-inference-service] files=None, response_type=None, auth_settings=None, [e2e-llm-inference-service] _return_http_data_only=None, collection_formats=None, [e2e-llm-inference-service] _preload_content=True, _request_timeout=None, _host=None): [e2e-llm-inference-service] [e2e-llm-inference-service] config = self.configuration [e2e-llm-inference-service] [e2e-llm-inference-service] # header parameters [e2e-llm-inference-service] header_params = header_params or {} [e2e-llm-inference-service] header_params.update(self.default_headers) [e2e-llm-inference-service] if self.cookie: [e2e-llm-inference-service] header_params['Cookie'] = self.cookie [e2e-llm-inference-service] if header_params: [e2e-llm-inference-service] header_params = self.sanitize_for_serialization(header_params) [e2e-llm-inference-service] header_params = dict(self.parameters_to_tuples(header_params, [e2e-llm-inference-service] collection_formats)) [e2e-llm-inference-service] [e2e-llm-inference-service] # path parameters [e2e-llm-inference-service] if path_params: [e2e-llm-inference-service] path_params = self.sanitize_for_serialization(path_params) [e2e-llm-inference-service] path_params = self.parameters_to_tuples(path_params, [e2e-llm-inference-service] collection_formats) [e2e-llm-inference-service] for k, v in path_params: [e2e-llm-inference-service] # specified safe chars, encode everything [e2e-llm-inference-service] resource_path = resource_path.replace( [e2e-llm-inference-service] '{%s}' % k, [e2e-llm-inference-service] 
quote(str(v), safe=config.safe_chars_for_path_param) [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] # query parameters [e2e-llm-inference-service] if query_params: [e2e-llm-inference-service] query_params = self.sanitize_for_serialization(query_params) [e2e-llm-inference-service] query_params = self.parameters_to_tuples(query_params, [e2e-llm-inference-service] collection_formats) [e2e-llm-inference-service] [e2e-llm-inference-service] # post parameters [e2e-llm-inference-service] if post_params or files: [e2e-llm-inference-service] post_params = post_params if post_params else [] [e2e-llm-inference-service] post_params = self.sanitize_for_serialization(post_params) [e2e-llm-inference-service] post_params = self.parameters_to_tuples(post_params, [e2e-llm-inference-service] collection_formats) [e2e-llm-inference-service] post_params.extend(self.files_parameters(files)) [e2e-llm-inference-service] [e2e-llm-inference-service] # auth setting [e2e-llm-inference-service] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-llm-inference-service] [e2e-llm-inference-service] # body [e2e-llm-inference-service] if body: [e2e-llm-inference-service] body = self.sanitize_for_serialization(body) [e2e-llm-inference-service] [e2e-llm-inference-service] # request url [e2e-llm-inference-service] if _host is None: [e2e-llm-inference-service] url = self.configuration.host + resource_path [e2e-llm-inference-service] else: [e2e-llm-inference-service] # use server/host defined in path or operation instead [e2e-llm-inference-service] url = _host + resource_path [e2e-llm-inference-service] [e2e-llm-inference-service] # perform request and return response [e2e-llm-inference-service] > response_data = self.request( [e2e-llm-inference-service] method, url, query_params=query_params, headers=header_params, [e2e-llm-inference-service] post_params=post_params, body=body, [e2e-llm-inference-service] _preload_content=_preload_content, [e2e-llm-inference-service] _request_timeout=_request_timeout) [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] method = 'GET' [e2e-llm-inference-service] url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceserviceconfigs/router-managed-autoscale-update-4ae0cfce' [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-llm-inference-service] post_params = [], body = None, _preload_content = True, _request_timeout = None [e2e-llm-inference-service] [e2e-llm-inference-service] def request(self, method, url, query_params=None, headers=None, [e2e-llm-inference-service] post_params=None, body=None, _preload_content=True, [e2e-llm-inference-service] _request_timeout=None): [e2e-llm-inference-service] """Makes the HTTP request using RESTClient.""" [e2e-llm-inference-service] if method == "GET": [e2e-llm-inference-service] > return self.rest_client.GET(url, [e2e-llm-inference-service] query_params=query_params, [e2e-llm-inference-service] _preload_content=_preload_content, 
[e2e-llm-inference-service] _request_timeout=_request_timeout, [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:373: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceserviceconfigs/router-managed-autoscale-update-4ae0cfce' [e2e-llm-inference-service] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-llm-inference-service] query_params = [], _preload_content = True, _request_timeout = None [e2e-llm-inference-service] [e2e-llm-inference-service] def GET(self, url, headers=None, query_params=None, _preload_content=True, [e2e-llm-inference-service] _request_timeout=None): [e2e-llm-inference-service] > return self.request("GET", url, [e2e-llm-inference-service] headers=headers, [e2e-llm-inference-service] _preload_content=_preload_content, [e2e-llm-inference-service] _request_timeout=_request_timeout, [e2e-llm-inference-service] query_params=query_params) [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:244: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] method = 'GET' [e2e-llm-inference-service] url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceserviceconfigs/router-managed-autoscale-update-4ae0cfce' [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-llm-inference-service] body = None, post_params = {}, _preload_content = True, _request_timeout = None [e2e-llm-inference-service] [e2e-llm-inference-service] def request(self, method, url, query_params=None, headers=None, [e2e-llm-inference-service] body=None, post_params=None, _preload_content=True, [e2e-llm-inference-service] _request_timeout=None): [e2e-llm-inference-service] """Perform requests. [e2e-llm-inference-service] [e2e-llm-inference-service] :param method: http request method [e2e-llm-inference-service] :param url: http request url [e2e-llm-inference-service] :param query_params: query parameters in the url [e2e-llm-inference-service] :param headers: http request headers [e2e-llm-inference-service] :param body: request json body, for `application/json` [e2e-llm-inference-service] :param post_params: request post parameters, [e2e-llm-inference-service] `application/x-www-form-urlencoded` [e2e-llm-inference-service] and `multipart/form-data` [e2e-llm-inference-service] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-llm-inference-service] be returned without reading/decoding response [e2e-llm-inference-service] data. Default is True. [e2e-llm-inference-service] :param _request_timeout: timeout setting for this request. 
If one [e2e-llm-inference-service] number provided, it will be total request [e2e-llm-inference-service] timeout. It can also be a pair (tuple) of [e2e-llm-inference-service] (connection, read) timeouts. [e2e-llm-inference-service] """ [e2e-llm-inference-service] method = method.upper() [e2e-llm-inference-service] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-llm-inference-service] 'PATCH', 'OPTIONS'] [e2e-llm-inference-service] [e2e-llm-inference-service] if post_params and body: [e2e-llm-inference-service] raise ApiValueError( [e2e-llm-inference-service] "body parameter cannot be used with post_params parameter." [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] post_params = post_params or {} [e2e-llm-inference-service] headers = headers or {} [e2e-llm-inference-service] [e2e-llm-inference-service] timeout = None [e2e-llm-inference-service] if _request_timeout: [e2e-llm-inference-service] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-llm-inference-service] timeout = urllib3.Timeout(total=_request_timeout) [e2e-llm-inference-service] elif (isinstance(_request_timeout, tuple) and [e2e-llm-inference-service] len(_request_timeout) == 2): [e2e-llm-inference-service] timeout = urllib3.Timeout( [e2e-llm-inference-service] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-llm-inference-service] [e2e-llm-inference-service] if 'Content-Type' not in headers: [e2e-llm-inference-service] headers['Content-Type'] = 'application/json' [e2e-llm-inference-service] [e2e-llm-inference-service] try: [e2e-llm-inference-service] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-llm-inference-service] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-llm-inference-service] if query_params: [e2e-llm-inference-service] url += '?' + urlencode(query_params) [e2e-llm-inference-service] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-llm-inference-service] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-llm-inference-service] if headers['Content-Type'] == 'application/json-patch+json': [e2e-llm-inference-service] if not isinstance(body, list): [e2e-llm-inference-service] headers['Content-Type'] = \ [e2e-llm-inference-service] 'application/strategic-merge-patch+json' [e2e-llm-inference-service] request_body = None [e2e-llm-inference-service] if body is not None: [e2e-llm-inference-service] request_body = json.dumps(body) [e2e-llm-inference-service] r = self.pool_manager.request( [e2e-llm-inference-service] method, url, [e2e-llm-inference-service] body=request_body, [e2e-llm-inference-service] preload_content=_preload_content, [e2e-llm-inference-service] timeout=timeout, [e2e-llm-inference-service] headers=headers) [e2e-llm-inference-service] elif headers['Content-Type'] == 'application/x-www-form-urlencoded': # noqa: E501 [e2e-llm-inference-service] r = self.pool_manager.request( [e2e-llm-inference-service] method, url, [e2e-llm-inference-service] fields=post_params, [e2e-llm-inference-service] encode_multipart=False, [e2e-llm-inference-service] preload_content=_preload_content, [e2e-llm-inference-service] timeout=timeout, [e2e-llm-inference-service] headers=headers) [e2e-llm-inference-service] elif headers['Content-Type'] == 'multipart/form-data': [e2e-llm-inference-service] # must del headers['Content-Type'], or the correct [e2e-llm-inference-service] # Content-Type which generated by urllib3 will be [e2e-llm-inference-service] # overwritten. 
[e2e-llm-inference-service] del headers['Content-Type'] [e2e-llm-inference-service] r = self.pool_manager.request( [e2e-llm-inference-service] method, url, [e2e-llm-inference-service] fields=post_params, [e2e-llm-inference-service] encode_multipart=True, [e2e-llm-inference-service] preload_content=_preload_content, [e2e-llm-inference-service] timeout=timeout, [e2e-llm-inference-service] headers=headers) [e2e-llm-inference-service] # Pass a `string` parameter directly in the body to support [e2e-llm-inference-service] # other content types than Json when `body` argument is [e2e-llm-inference-service] # provided in serialized form [e2e-llm-inference-service] elif isinstance(body, str) or isinstance(body, bytes): [e2e-llm-inference-service] request_body = body [e2e-llm-inference-service] r = self.pool_manager.request( [e2e-llm-inference-service] method, url, [e2e-llm-inference-service] body=request_body, [e2e-llm-inference-service] preload_content=_preload_content, [e2e-llm-inference-service] timeout=timeout, [e2e-llm-inference-service] headers=headers) [e2e-llm-inference-service] else: [e2e-llm-inference-service] # Cannot generate the request from given parameters [e2e-llm-inference-service] msg = """Cannot prepare a request message for provided [e2e-llm-inference-service] arguments. Please check that your arguments match [e2e-llm-inference-service] declared content type.""" [e2e-llm-inference-service] raise ApiException(status=0, reason=msg) [e2e-llm-inference-service] # For `GET`, `HEAD` [e2e-llm-inference-service] else: [e2e-llm-inference-service] r = self.pool_manager.request(method, url, [e2e-llm-inference-service] fields=query_params, [e2e-llm-inference-service] preload_content=_preload_content, [e2e-llm-inference-service] timeout=timeout, [e2e-llm-inference-service] headers=headers) [e2e-llm-inference-service] except urllib3.exceptions.SSLError as e: [e2e-llm-inference-service] msg = "{0}\n{1}".format(type(e).__name__, str(e)) [e2e-llm-inference-service] raise ApiException(status=0, reason=msg) [e2e-llm-inference-service] [e2e-llm-inference-service] if _preload_content: [e2e-llm-inference-service] r = RESTResponse(r) [e2e-llm-inference-service] [e2e-llm-inference-service] # In the python 3, the response.data is bytes. [e2e-llm-inference-service] # we need to decode it to string. 
[e2e-llm-inference-service] if six.PY3: [e2e-llm-inference-service] r.data = r.data.decode('utf8') [e2e-llm-inference-service] [e2e-llm-inference-service] # log response body [e2e-llm-inference-service] logger.debug("response body: %s", r.data) [e2e-llm-inference-service] [e2e-llm-inference-service] if not 200 <= r.status <= 299: [e2e-llm-inference-service] > raise ApiException(http_resp=r) [e2e-llm-inference-service] E kubernetes.client.exceptions.ApiException: (404) [e2e-llm-inference-service] E Reason: Not Found [e2e-llm-inference-service] E HTTP response headers: HTTPHeaderDict({'Audit-Id': 'b30525ce-b370-4e83-baa9-7783e00379c9', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': '20fe89dc-62d1-40a8-bde3-b4d3893bb017', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3f197158-a38d-4553-8c16-85addf5fb0ab', 'Date': 'Fri, 24 Apr 2026 20:59:10 GMT', 'Content-Length': '338'}) [e2e-llm-inference-service] E HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"llminferenceserviceconfigs.serving.kserve.io \"router-managed-autoscale-update-4ae0cfce\" not found","reason":"NotFound","details":{"name":"router-managed-autoscale-update-4ae0cfce","group":"serving.kserve.io","kind":"llminferenceserviceconfigs"},"code":404} [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:238: ApiException [e2e-llm-inference-service] [e2e-llm-inference-service] During handling of the above exception, another exception occurred: [e2e-llm-inference-service] [e2e-llm-inference-service] request = > [e2e-llm-inference-service] [e2e-llm-inference-service] @pytest.fixture(scope="function") [e2e-llm-inference-service] def test_case(request): [e2e-llm-inference-service] tc = request.param [e2e-llm-inference-service] created_configs = [] [e2e-llm-inference-service] [e2e-llm-inference-service] inject_k8s_proxy() [e2e-llm-inference-service] [e2e-llm-inference-service] kserve_client = KServeClient( [e2e-llm-inference-service] config_file=os.environ.get("KUBECONFIG", "~/.kube/config"), [e2e-llm-inference-service] client_configuration=client.Configuration(), [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] # Execute before test hooks [e2e-llm-inference-service] try: [e2e-llm-inference-service] for func in tc.before_test: [e2e-llm-inference-service] func() [e2e-llm-inference-service] except Exception as before_test_error: [e2e-llm-inference-service] raise RuntimeError( [e2e-llm-inference-service] f"Failed to execute before test hook: {before_test_error}" [e2e-llm-inference-service] ) from before_test_error [e2e-llm-inference-service] [e2e-llm-inference-service] try: [e2e-llm-inference-service] # Validate base_refs defined in the test fixture exist in LLMINFERENCESERVICE_CONFIGS [e2e-llm-inference-service] missing_refs = [ [e2e-llm-inference-service] ref for ref in tc.base_refs if ref not in LLMINFERENCESERVICE_CONFIGS [e2e-llm-inference-service] ] [e2e-llm-inference-service] if missing_refs: [e2e-llm-inference-service] raise ValueError( [e2e-llm-inference-service] f"Missing base_refs in LLMINFERENCESERVICE_CONFIGS: {missing_refs}" [e2e-llm-inference-service] ) [e2e-llm-inference-service] if not tc.service_name: [e2e-llm-inference-service] tc.service_name = generate_service_name(request.node.name, tc.base_refs) [e2e-llm-inference-service] 
tc.model_name = _get_model_name_from_configs(tc.base_refs) [e2e-llm-inference-service] [e2e-llm-inference-service] # Create unique configs for this test [e2e-llm-inference-service] unique_base_refs = [] [e2e-llm-inference-service] for base_ref in tc.base_refs: [e2e-llm-inference-service] unique_config_name = generate_k8s_safe_suffix(base_ref, [tc.service_name]) [e2e-llm-inference-service] unique_base_refs.append(unique_config_name) [e2e-llm-inference-service] [e2e-llm-inference-service] original_spec = LLMINFERENCESERVICE_CONFIGS[base_ref] [e2e-llm-inference-service] [e2e-llm-inference-service] unique_config_body = { [e2e-llm-inference-service] "apiVersion": "serving.kserve.io/v1alpha1", [e2e-llm-inference-service] "kind": "LLMInferenceServiceConfig", [e2e-llm-inference-service] "metadata": { [e2e-llm-inference-service] "name": unique_config_name, [e2e-llm-inference-service] "namespace": KSERVE_TEST_NAMESPACE, [e2e-llm-inference-service] }, [e2e-llm-inference-service] "spec": original_spec, [e2e-llm-inference-service] } [e2e-llm-inference-service] [e2e-llm-inference-service] > _create_or_update_llmisvc_config( [e2e-llm-inference-service] kserve_client, unique_config_body, KSERVE_TEST_NAMESPACE [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/fixtures.py:1164: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] kserve_client = [e2e-llm-inference-service] llm_config = {'apiVersion': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceServiceConfig', 'metadata': {'name': 'router-managed...pdate-4ae0cfce', 'namespace': 'kserve-ci-e2e-test'}, 'spec': {'router': {'gateway': {}, 'route': {}, 'scheduler': {}}}} [e2e-llm-inference-service] namespace = 'kserve-ci-e2e-test' [e2e-llm-inference-service] [e2e-llm-inference-service] def _create_or_update_llmisvc_config(kserve_client, llm_config, namespace=None): [e2e-llm-inference-service] """Create or update an LLMInferenceServiceConfig resource.""" [e2e-llm-inference-service] version = llm_config["apiVersion"].split("/")[1] [e2e-llm-inference-service] [e2e-llm-inference-service] if namespace is None: [e2e-llm-inference-service] namespace = llm_config.get("metadata", {}).get("namespace", "default") [e2e-llm-inference-service] [e2e-llm-inference-service] name = llm_config.get("metadata", {}).get("name") [e2e-llm-inference-service] if not name: [e2e-llm-inference-service] raise ValueError("LLMInferenceServiceConfig must have a name in metadata") [e2e-llm-inference-service] [e2e-llm-inference-service] logger.info(f"Checking LLMInferenceServiceConfig {name} in namespace {namespace}") [e2e-llm-inference-service] [e2e-llm-inference-service] try: [e2e-llm-inference-service] existing_config = kserve_client.api_instance.get_namespaced_custom_object( [e2e-llm-inference-service] constants.KSERVE_GROUP, [e2e-llm-inference-service] version, [e2e-llm-inference-service] namespace, [e2e-llm-inference-service] KSERVE_PLURAL_LLMINFERENCESERVICECONFIG, [e2e-llm-inference-service] name, [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] llm_config["metadata"] = existing_config["metadata"] [e2e-llm-inference-service] [e2e-llm-inference-service] outputs = kserve_client.api_instance.replace_namespaced_custom_object( [e2e-llm-inference-service] constants.KSERVE_GROUP, [e2e-llm-inference-service] version, [e2e-llm-inference-service] namespace, [e2e-llm-inference-service] 
KSERVE_PLURAL_LLMINFERENCESERVICECONFIG, [e2e-llm-inference-service] name, [e2e-llm-inference-service] llm_config, [e2e-llm-inference-service] ) [e2e-llm-inference-service] logger.info(f"✓ Successfully updated LLMInferenceServiceConfig {name}") [e2e-llm-inference-service] return outputs [e2e-llm-inference-service] [e2e-llm-inference-service] except client.rest.ApiException as e: [e2e-llm-inference-service] if e.status == 404: # Not found - create it [e2e-llm-inference-service] logger.info( [e2e-llm-inference-service] f"Resource not found, creating LLMInferenceServiceConfig {name}" [e2e-llm-inference-service] ) [e2e-llm-inference-service] > outputs = kserve_client.api_instance.create_namespaced_custom_object( [e2e-llm-inference-service] constants.KSERVE_GROUP, [e2e-llm-inference-service] version, [e2e-llm-inference-service] namespace, [e2e-llm-inference-service] KSERVE_PLURAL_LLMINFERENCESERVICECONFIG, [e2e-llm-inference-service] llm_config, [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/fixtures.py:1306: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] group = 'serving.kserve.io', version = 'v1alpha1' [e2e-llm-inference-service] namespace = 'kserve-ci-e2e-test', plural = 'llminferenceserviceconfigs' [e2e-llm-inference-service] body = {'apiVersion': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceServiceConfig', 'metadata': {'name': 'router-managed...pdate-4ae0cfce', 'namespace': 'kserve-ci-e2e-test'}, 'spec': {'router': {'gateway': {}, 'route': {}, 'scheduler': {}}}} [e2e-llm-inference-service] kwargs = {'_return_http_data_only': True} [e2e-llm-inference-service] [e2e-llm-inference-service] def create_namespaced_custom_object(self, group, version, namespace, plural, body, **kwargs): # noqa: E501 [e2e-llm-inference-service] """create_namespaced_custom_object # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] Creates a namespace scoped Custom object # noqa: E501 [e2e-llm-inference-service] This method makes a synchronous HTTP request by default. To make an [e2e-llm-inference-service] asynchronous HTTP request, please pass async_req=True [e2e-llm-inference-service] >>> thread = api.create_namespaced_custom_object(group, version, namespace, plural, body, async_req=True) [e2e-llm-inference-service] >>> result = thread.get() [e2e-llm-inference-service] [e2e-llm-inference-service] :param async_req bool: execute request asynchronously [e2e-llm-inference-service] :param str group: The custom resource's group name (required) [e2e-llm-inference-service] :param str version: The custom resource's version (required) [e2e-llm-inference-service] :param str namespace: The custom resource's namespace (required) [e2e-llm-inference-service] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-llm-inference-service] :param object body: The JSON schema of the Resource to create. (required) [e2e-llm-inference-service] :param str pretty: If 'true', then the output is pretty printed. [e2e-llm-inference-service] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed [e2e-llm-inference-service] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-llm-inference-service] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. (optional) [e2e-llm-inference-service] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-llm-inference-service] be returned without reading/decoding response [e2e-llm-inference-service] data. Default is True. [e2e-llm-inference-service] :param _request_timeout: timeout setting for this request. If one [e2e-llm-inference-service] number provided, it will be total request [e2e-llm-inference-service] timeout. It can also be a pair (tuple) of [e2e-llm-inference-service] (connection, read) timeouts. [e2e-llm-inference-service] :return: object [e2e-llm-inference-service] If the method is called asynchronously, [e2e-llm-inference-service] returns the request thread. [e2e-llm-inference-service] """ [e2e-llm-inference-service] kwargs['_return_http_data_only'] = True [e2e-llm-inference-service] > return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs) # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:231: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] group = 'serving.kserve.io', version = 'v1alpha1' [e2e-llm-inference-service] namespace = 'kserve-ci-e2e-test', plural = 'llminferenceserviceconfigs' [e2e-llm-inference-service] body = {'apiVersion': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceServiceConfig', 'metadata': {'name': 'router-managed...pdate-4ae0cfce', 'namespace': 'kserve-ci-e2e-test'}, 'spec': {'router': {'gateway': {}, 'route': {}, 'scheduler': {}}}} [e2e-llm-inference-service] kwargs = {'_return_http_data_only': True} [e2e-llm-inference-service] local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...], 'au...4ae0cfce', 'namespace': 'kserve-ci-e2e-test'}, 'spec': {'router': {'gateway': {}, 'route': {}, 'scheduler': {}}}}, ...} [e2e-llm-inference-service] all_params = ['group', 'version', 'namespace', 'plural', 'body', 'pretty', ...] 
[e2e-llm-inference-service] key = '_return_http_data_only', val = True, collection_formats = {} [e2e-llm-inference-service] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'llminferenceserviceconfigs', 'version': 'v1alpha1'} [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] [e2e-llm-inference-service] def create_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, body, **kwargs): # noqa: E501 [e2e-llm-inference-service] """create_namespaced_custom_object # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] Creates a namespace scoped Custom object # noqa: E501 [e2e-llm-inference-service] This method makes a synchronous HTTP request by default. To make an [e2e-llm-inference-service] asynchronous HTTP request, please pass async_req=True [e2e-llm-inference-service] >>> thread = api.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, async_req=True) [e2e-llm-inference-service] >>> result = thread.get() [e2e-llm-inference-service] [e2e-llm-inference-service] :param async_req bool: execute request asynchronously [e2e-llm-inference-service] :param str group: The custom resource's group name (required) [e2e-llm-inference-service] :param str version: The custom resource's version (required) [e2e-llm-inference-service] :param str namespace: The custom resource's namespace (required) [e2e-llm-inference-service] :param str plural: The custom resource's plural name. For TPRs this would be lowercase plural kind. (required) [e2e-llm-inference-service] :param object body: The JSON schema of the Resource to create. (required) [e2e-llm-inference-service] :param str pretty: If 'true', then the output is pretty printed. [e2e-llm-inference-service] :param str dry_run: When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed [e2e-llm-inference-service] :param str field_manager: fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. [e2e-llm-inference-service] :param str field_validation: fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. 
(optional) [e2e-llm-inference-service] :param _return_http_data_only: response data without head status code [e2e-llm-inference-service] and headers [e2e-llm-inference-service] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-llm-inference-service] be returned without reading/decoding response [e2e-llm-inference-service] data. Default is True. [e2e-llm-inference-service] :param _request_timeout: timeout setting for this request. If one [e2e-llm-inference-service] number provided, it will be total request [e2e-llm-inference-service] timeout. It can also be a pair (tuple) of [e2e-llm-inference-service] (connection, read) timeouts. [e2e-llm-inference-service] :return: tuple(object, status_code(int), headers(HTTPHeaderDict)) [e2e-llm-inference-service] If the method is called asynchronously, [e2e-llm-inference-service] returns the request thread. [e2e-llm-inference-service] """ [e2e-llm-inference-service] [e2e-llm-inference-service] local_var_params = locals() [e2e-llm-inference-service] [e2e-llm-inference-service] all_params = [ [e2e-llm-inference-service] 'group', [e2e-llm-inference-service] 'version', [e2e-llm-inference-service] 'namespace', [e2e-llm-inference-service] 'plural', [e2e-llm-inference-service] 'body', [e2e-llm-inference-service] 'pretty', [e2e-llm-inference-service] 'dry_run', [e2e-llm-inference-service] 'field_manager', [e2e-llm-inference-service] 'field_validation' [e2e-llm-inference-service] ] [e2e-llm-inference-service] all_params.extend( [e2e-llm-inference-service] [ [e2e-llm-inference-service] 'async_req', [e2e-llm-inference-service] '_return_http_data_only', [e2e-llm-inference-service] '_preload_content', [e2e-llm-inference-service] '_request_timeout' [e2e-llm-inference-service] ] [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] for key, val in six.iteritems(local_var_params['kwargs']): [e2e-llm-inference-service] if key not in all_params: [e2e-llm-inference-service] raise ApiTypeError( [e2e-llm-inference-service] "Got an unexpected keyword argument '%s'" [e2e-llm-inference-service] " to method create_namespaced_custom_object" % key [e2e-llm-inference-service] ) [e2e-llm-inference-service] local_var_params[key] = val [e2e-llm-inference-service] del local_var_params['kwargs'] [e2e-llm-inference-service] # verify the required parameter 'group' is set [e2e-llm-inference-service] if self.api_client.client_side_validation and ('group' not in local_var_params or # noqa: E501 [e2e-llm-inference-service] local_var_params['group'] is None): # noqa: E501 [e2e-llm-inference-service] raise ApiValueError("Missing the required parameter `group` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-llm-inference-service] # verify the required parameter 'version' is set [e2e-llm-inference-service] if self.api_client.client_side_validation and ('version' not in local_var_params or # noqa: E501 [e2e-llm-inference-service] local_var_params['version'] is None): # noqa: E501 [e2e-llm-inference-service] raise ApiValueError("Missing the required parameter `version` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-llm-inference-service] # verify the required parameter 'namespace' is set [e2e-llm-inference-service] if self.api_client.client_side_validation and ('namespace' not in local_var_params or # noqa: E501 [e2e-llm-inference-service] local_var_params['namespace'] is None): # noqa: E501 [e2e-llm-inference-service] raise ApiValueError("Missing the required parameter `namespace` when calling 
`create_namespaced_custom_object`") # noqa: E501 [e2e-llm-inference-service] # verify the required parameter 'plural' is set [e2e-llm-inference-service] if self.api_client.client_side_validation and ('plural' not in local_var_params or # noqa: E501 [e2e-llm-inference-service] local_var_params['plural'] is None): # noqa: E501 [e2e-llm-inference-service] raise ApiValueError("Missing the required parameter `plural` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-llm-inference-service] # verify the required parameter 'body' is set [e2e-llm-inference-service] if self.api_client.client_side_validation and ('body' not in local_var_params or # noqa: E501 [e2e-llm-inference-service] local_var_params['body'] is None): # noqa: E501 [e2e-llm-inference-service] raise ApiValueError("Missing the required parameter `body` when calling `create_namespaced_custom_object`") # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] collection_formats = {} [e2e-llm-inference-service] [e2e-llm-inference-service] path_params = {} [e2e-llm-inference-service] if 'group' in local_var_params: [e2e-llm-inference-service] path_params['group'] = local_var_params['group'] # noqa: E501 [e2e-llm-inference-service] if 'version' in local_var_params: [e2e-llm-inference-service] path_params['version'] = local_var_params['version'] # noqa: E501 [e2e-llm-inference-service] if 'namespace' in local_var_params: [e2e-llm-inference-service] path_params['namespace'] = local_var_params['namespace'] # noqa: E501 [e2e-llm-inference-service] if 'plural' in local_var_params: [e2e-llm-inference-service] path_params['plural'] = local_var_params['plural'] # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] if 'pretty' in local_var_params and local_var_params['pretty'] is not None: # noqa: E501 [e2e-llm-inference-service] query_params.append(('pretty', local_var_params['pretty'])) # noqa: E501 [e2e-llm-inference-service] if 'dry_run' in local_var_params and local_var_params['dry_run'] is not None: # noqa: E501 [e2e-llm-inference-service] query_params.append(('dryRun', local_var_params['dry_run'])) # noqa: E501 [e2e-llm-inference-service] if 'field_manager' in local_var_params and local_var_params['field_manager'] is not None: # noqa: E501 [e2e-llm-inference-service] query_params.append(('fieldManager', local_var_params['field_manager'])) # noqa: E501 [e2e-llm-inference-service] if 'field_validation' in local_var_params and local_var_params['field_validation'] is not None: # noqa: E501 [e2e-llm-inference-service] query_params.append(('fieldValidation', local_var_params['field_validation'])) # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] header_params = {} [e2e-llm-inference-service] [e2e-llm-inference-service] form_params = [] [e2e-llm-inference-service] local_var_files = {} [e2e-llm-inference-service] [e2e-llm-inference-service] body_params = None [e2e-llm-inference-service] if 'body' in local_var_params: [e2e-llm-inference-service] body_params = local_var_params['body'] [e2e-llm-inference-service] # HTTP header `Accept` [e2e-llm-inference-service] header_params['Accept'] = self.api_client.select_header_accept( [e2e-llm-inference-service] ['application/json']) # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] # Authentication setting [e2e-llm-inference-service] auth_settings = ['BearerToken'] # noqa: E501 [e2e-llm-inference-service] [e2e-llm-inference-service] > return self.api_client.call_api( 
[e2e-llm-inference-service] '/apis/{group}/{version}/namespaces/{namespace}/{plural}', 'POST', [e2e-llm-inference-service] path_params, [e2e-llm-inference-service] query_params, [e2e-llm-inference-service] header_params, [e2e-llm-inference-service] body=body_params, [e2e-llm-inference-service] post_params=form_params, [e2e-llm-inference-service] files=local_var_files, [e2e-llm-inference-service] response_type='object', # noqa: E501 [e2e-llm-inference-service] auth_settings=auth_settings, [e2e-llm-inference-service] async_req=local_var_params.get('async_req'), [e2e-llm-inference-service] _return_http_data_only=local_var_params.get('_return_http_data_only'), # noqa: E501 [e2e-llm-inference-service] _preload_content=local_var_params.get('_preload_content', True), [e2e-llm-inference-service] _request_timeout=local_var_params.get('_request_timeout'), [e2e-llm-inference-service] collection_formats=collection_formats) [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:354: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}' [e2e-llm-inference-service] method = 'POST' [e2e-llm-inference-service] path_params = {'group': 'serving.kserve.io', 'namespace': 'kserve-ci-e2e-test', 'plural': 'llminferenceserviceconfigs', 'version': 'v1alpha1'} [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-llm-inference-service] body = {'apiVersion': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceServiceConfig', 'metadata': {'name': 'router-managed...pdate-4ae0cfce', 'namespace': 'kserve-ci-e2e-test'}, 'spec': {'router': {'gateway': {}, 'route': {}, 'scheduler': {}}}} [e2e-llm-inference-service] post_params = [], files = {}, response_type = 'object' [e2e-llm-inference-service] auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True [e2e-llm-inference-service] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-llm-inference-service] _host = None [e2e-llm-inference-service] [e2e-llm-inference-service] def call_api(self, resource_path, method, [e2e-llm-inference-service] path_params=None, query_params=None, header_params=None, [e2e-llm-inference-service] body=None, post_params=None, files=None, [e2e-llm-inference-service] response_type=None, auth_settings=None, async_req=None, [e2e-llm-inference-service] _return_http_data_only=None, collection_formats=None, [e2e-llm-inference-service] _preload_content=True, _request_timeout=None, _host=None): [e2e-llm-inference-service] """Makes the HTTP request (synchronous) and returns deserialized data. [e2e-llm-inference-service] [e2e-llm-inference-service] To make an async_req request, set the async_req parameter. [e2e-llm-inference-service] [e2e-llm-inference-service] :param resource_path: Path to method endpoint. [e2e-llm-inference-service] :param method: Method to call. [e2e-llm-inference-service] :param path_params: Path parameters in the url. [e2e-llm-inference-service] :param query_params: Query parameters in the url. [e2e-llm-inference-service] :param header_params: Header parameters to be [e2e-llm-inference-service] placed in the request header. 
[e2e-llm-inference-service] :param body: Request body. [e2e-llm-inference-service] :param post_params dict: Request post form parameters, [e2e-llm-inference-service] for `application/x-www-form-urlencoded`, `multipart/form-data`. [e2e-llm-inference-service] :param auth_settings list: Auth Settings names for the request. [e2e-llm-inference-service] :param response: Response data type. [e2e-llm-inference-service] :param files dict: key -> filename, value -> filepath, [e2e-llm-inference-service] for `multipart/form-data`. [e2e-llm-inference-service] :param async_req bool: execute request asynchronously [e2e-llm-inference-service] :param _return_http_data_only: response data without head status code [e2e-llm-inference-service] and headers [e2e-llm-inference-service] :param collection_formats: dict of collection formats for path, query, [e2e-llm-inference-service] header, and post parameters. [e2e-llm-inference-service] :param _preload_content: if False, the urllib3.HTTPResponse object will [e2e-llm-inference-service] be returned without reading/decoding response [e2e-llm-inference-service] data. Default is True. [e2e-llm-inference-service] :param _request_timeout: timeout setting for this request. If one [e2e-llm-inference-service] number provided, it will be total request [e2e-llm-inference-service] timeout. It can also be a pair (tuple) of [e2e-llm-inference-service] (connection, read) timeouts. [e2e-llm-inference-service] :return: [e2e-llm-inference-service] If async_req parameter is True, [e2e-llm-inference-service] the request will be called asynchronously. [e2e-llm-inference-service] The method will return the request thread. [e2e-llm-inference-service] If parameter async_req is False or missing, [e2e-llm-inference-service] then the method will return the response directly. 
[e2e-llm-inference-service] """ [e2e-llm-inference-service] if not async_req: [e2e-llm-inference-service] > return self.__call_api(resource_path, method, [e2e-llm-inference-service] path_params, query_params, header_params, [e2e-llm-inference-service] body, post_params, files, [e2e-llm-inference-service] response_type, auth_settings, [e2e-llm-inference-service] _return_http_data_only, collection_formats, [e2e-llm-inference-service] _preload_content, _request_timeout, _host) [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] resource_path = '/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceserviceconfigs' [e2e-llm-inference-service] method = 'POST' [e2e-llm-inference-service] path_params = [('group', 'serving.kserve.io'), ('version', 'v1alpha1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'llminferenceserviceconfigs')] [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-llm-inference-service] body = {'apiVersion': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceServiceConfig', 'metadata': {'name': 'router-managed...pdate-4ae0cfce', 'namespace': 'kserve-ci-e2e-test'}, 'spec': {'router': {'gateway': {}, 'route': {}, 'scheduler': {}}}} [e2e-llm-inference-service] post_params = [], files = {}, response_type = 'object' [e2e-llm-inference-service] auth_settings = ['BearerToken'], _return_http_data_only = True [e2e-llm-inference-service] collection_formats = {}, _preload_content = True, _request_timeout = None [e2e-llm-inference-service] _host = None [e2e-llm-inference-service] [e2e-llm-inference-service] def __call_api( [e2e-llm-inference-service] self, resource_path, method, path_params=None, [e2e-llm-inference-service] query_params=None, header_params=None, body=None, post_params=None, [e2e-llm-inference-service] files=None, response_type=None, auth_settings=None, [e2e-llm-inference-service] _return_http_data_only=None, collection_formats=None, [e2e-llm-inference-service] _preload_content=True, _request_timeout=None, _host=None): [e2e-llm-inference-service] [e2e-llm-inference-service] config = self.configuration [e2e-llm-inference-service] [e2e-llm-inference-service] # header parameters [e2e-llm-inference-service] header_params = header_params or {} [e2e-llm-inference-service] header_params.update(self.default_headers) [e2e-llm-inference-service] if self.cookie: [e2e-llm-inference-service] header_params['Cookie'] = self.cookie [e2e-llm-inference-service] if header_params: [e2e-llm-inference-service] header_params = self.sanitize_for_serialization(header_params) [e2e-llm-inference-service] header_params = dict(self.parameters_to_tuples(header_params, [e2e-llm-inference-service] collection_formats)) [e2e-llm-inference-service] [e2e-llm-inference-service] # path parameters [e2e-llm-inference-service] if path_params: [e2e-llm-inference-service] path_params = self.sanitize_for_serialization(path_params) [e2e-llm-inference-service] path_params = self.parameters_to_tuples(path_params, [e2e-llm-inference-service] collection_formats) [e2e-llm-inference-service] for k, v in path_params: [e2e-llm-inference-service] # 
specified safe chars, encode everything [e2e-llm-inference-service] resource_path = resource_path.replace( [e2e-llm-inference-service] '{%s}' % k, [e2e-llm-inference-service] quote(str(v), safe=config.safe_chars_for_path_param) [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] # query parameters [e2e-llm-inference-service] if query_params: [e2e-llm-inference-service] query_params = self.sanitize_for_serialization(query_params) [e2e-llm-inference-service] query_params = self.parameters_to_tuples(query_params, [e2e-llm-inference-service] collection_formats) [e2e-llm-inference-service] [e2e-llm-inference-service] # post parameters [e2e-llm-inference-service] if post_params or files: [e2e-llm-inference-service] post_params = post_params if post_params else [] [e2e-llm-inference-service] post_params = self.sanitize_for_serialization(post_params) [e2e-llm-inference-service] post_params = self.parameters_to_tuples(post_params, [e2e-llm-inference-service] collection_formats) [e2e-llm-inference-service] post_params.extend(self.files_parameters(files)) [e2e-llm-inference-service] [e2e-llm-inference-service] # auth setting [e2e-llm-inference-service] self.update_params_for_auth(header_params, query_params, auth_settings) [e2e-llm-inference-service] [e2e-llm-inference-service] # body [e2e-llm-inference-service] if body: [e2e-llm-inference-service] body = self.sanitize_for_serialization(body) [e2e-llm-inference-service] [e2e-llm-inference-service] # request url [e2e-llm-inference-service] if _host is None: [e2e-llm-inference-service] url = self.configuration.host + resource_path [e2e-llm-inference-service] else: [e2e-llm-inference-service] # use server/host defined in path or operation instead [e2e-llm-inference-service] url = _host + resource_path [e2e-llm-inference-service] [e2e-llm-inference-service] # perform request and return response [e2e-llm-inference-service] > response_data = self.request( [e2e-llm-inference-service] method, url, query_params=query_params, headers=header_params, [e2e-llm-inference-service] post_params=post_params, body=body, [e2e-llm-inference-service] _preload_content=_preload_content, [e2e-llm-inference-service] _request_timeout=_request_timeout) [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] method = 'POST' [e2e-llm-inference-service] url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceserviceconfigs' [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-llm-inference-service] post_params = [] [e2e-llm-inference-service] body = {'apiVersion': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceServiceConfig', 'metadata': {'name': 'router-managed...pdate-4ae0cfce', 'namespace': 'kserve-ci-e2e-test'}, 'spec': {'router': {'gateway': {}, 'route': {}, 'scheduler': {}}}} [e2e-llm-inference-service] _preload_content = True, _request_timeout = None [e2e-llm-inference-service] [e2e-llm-inference-service] def request(self, method, url, query_params=None, headers=None, 
[e2e-llm-inference-service] post_params=None, body=None, _preload_content=True, [e2e-llm-inference-service] _request_timeout=None): [e2e-llm-inference-service] """Makes the HTTP request using RESTClient.""" [e2e-llm-inference-service] if method == "GET": [e2e-llm-inference-service] return self.rest_client.GET(url, [e2e-llm-inference-service] query_params=query_params, [e2e-llm-inference-service] _preload_content=_preload_content, [e2e-llm-inference-service] _request_timeout=_request_timeout, [e2e-llm-inference-service] headers=headers) [e2e-llm-inference-service] elif method == "HEAD": [e2e-llm-inference-service] return self.rest_client.HEAD(url, [e2e-llm-inference-service] query_params=query_params, [e2e-llm-inference-service] _preload_content=_preload_content, [e2e-llm-inference-service] _request_timeout=_request_timeout, [e2e-llm-inference-service] headers=headers) [e2e-llm-inference-service] elif method == "OPTIONS": [e2e-llm-inference-service] return self.rest_client.OPTIONS(url, [e2e-llm-inference-service] query_params=query_params, [e2e-llm-inference-service] headers=headers, [e2e-llm-inference-service] _preload_content=_preload_content, [e2e-llm-inference-service] _request_timeout=_request_timeout) [e2e-llm-inference-service] elif method == "POST": [e2e-llm-inference-service] > return self.rest_client.POST(url, [e2e-llm-inference-service] query_params=query_params, [e2e-llm-inference-service] headers=headers, [e2e-llm-inference-service] post_params=post_params, [e2e-llm-inference-service] _preload_content=_preload_content, [e2e-llm-inference-service] _request_timeout=_request_timeout, [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:391: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceserviceconfigs' [e2e-llm-inference-service] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-llm-inference-service] query_params = [], post_params = [] [e2e-llm-inference-service] body = {'apiVersion': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceServiceConfig', 'metadata': {'name': 'router-managed...pdate-4ae0cfce', 'namespace': 'kserve-ci-e2e-test'}, 'spec': {'router': {'gateway': {}, 'route': {}, 'scheduler': {}}}} [e2e-llm-inference-service] _preload_content = True, _request_timeout = None [e2e-llm-inference-service] [e2e-llm-inference-service] def POST(self, url, headers=None, query_params=None, post_params=None, [e2e-llm-inference-service] body=None, _preload_content=True, _request_timeout=None): [e2e-llm-inference-service] > return self.request("POST", url, [e2e-llm-inference-service] headers=headers, [e2e-llm-inference-service] query_params=query_params, [e2e-llm-inference-service] post_params=post_params, [e2e-llm-inference-service] _preload_content=_preload_content, [e2e-llm-inference-service] _request_timeout=_request_timeout, [e2e-llm-inference-service] body=body) [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:279: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] self = [e2e-llm-inference-service] method = 'POST' [e2e-llm-inference-service] url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceserviceconfigs' [e2e-llm-inference-service] query_params = [] [e2e-llm-inference-service] headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'} [e2e-llm-inference-service] body = {'apiVersion': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceServiceConfig', 'metadata': {'name': 'router-managed...pdate-4ae0cfce', 'namespace': 'kserve-ci-e2e-test'}, 'spec': {'router': {'gateway': {}, 'route': {}, 'scheduler': {}}}} [e2e-llm-inference-service] post_params = {}, _preload_content = True, _request_timeout = None [e2e-llm-inference-service] [... rest.py request() source elided: byte-for-byte identical to the GET frame shown earlier; only the failing status check at the end of the method follows ...]
[e2e-llm-inference-service] if six.PY3: [e2e-llm-inference-service] r.data = r.data.decode('utf8') [e2e-llm-inference-service] [e2e-llm-inference-service] # log response body [e2e-llm-inference-service] logger.debug("response body: %s", r.data) [e2e-llm-inference-service] [e2e-llm-inference-service] if not 200 <= r.status <= 299: [e2e-llm-inference-service] > raise ApiException(http_resp=r) [e2e-llm-inference-service] E kubernetes.client.exceptions.ApiException: (500) [e2e-llm-inference-service] E Reason: Internal Server Error [e2e-llm-inference-service] E HTTP response headers: HTTPHeaderDict({'Audit-Id': '6221d5ff-29a8-4e17-a028-1bb322919ed7', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': '20fe89dc-62d1-40a8-bde3-b4d3893bb017', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3f197158-a38d-4553-8c16-85addf5fb0ab', 'Date': 'Fri, 24 Apr 2026 20:59:10 GMT', 'Content-Length': '701'}) [e2e-llm-inference-service] E HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook \"llminferenceserviceconfig.kserve-webhook-server.v1alpha1.validator\": failed to call webhook: Post \"https://llmisvc-webhook-server-service.kserve.svc:443/validate-serving-kserve-io-v1alpha1-llminferenceserviceconfig?timeout=10s\": EOF","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook \"llminferenceserviceconfig.kserve-webhook-server.v1alpha1.validator\": failed to call webhook: Post \"https://llmisvc-webhook-server-service.kserve.svc:443/validate-serving-kserve-io-v1alpha1-llminferenceserviceconfig?timeout=10s\": EOF"}]},"code":500} [e2e-llm-inference-service] [e2e-llm-inference-service] ../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:238: ApiException [e2e-llm-inference-service] ------------------------------ Captured log setup ------------------------------ [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-autoscale-update-4ae0cfce in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-autoscale-update-4ae0cfce [e2e-llm-inference-service] =================================== FAILURES =================================== [e2e-llm-inference-service] _ test_llm_autoscaling_hpa_deployment[router-managed-workload-llmd-simulator-no-replicas-scaling-hpa] _ [e2e-llm-inference-service] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-llm-inference-service] [e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-hpa'], prompt='KServe is a', ser... 
{'name': 'scaling-hpa-autoscale-hpa-deplo-347a3180'}]}, [e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m') [e2e-llm-inference-service] [e2e-llm-inference-service] @pytest.mark.llminferenceservice [e2e-llm-inference-service] @pytest.mark.autoscaling [e2e-llm-inference-service] @pytest.mark.autoscaling_hpa [e2e-llm-inference-service] @pytest.mark.parametrize( [e2e-llm-inference-service] "test_case", [e2e-llm-inference-service] [ [e2e-llm-inference-service] pytest.param( [e2e-llm-inference-service] TestCase( [e2e-llm-inference-service] base_refs=[ [e2e-llm-inference-service] "router-managed", [e2e-llm-inference-service] "workload-llmd-simulator-no-replicas", [e2e-llm-inference-service] "scaling-hpa", [e2e-llm-inference-service] ], [e2e-llm-inference-service] prompt="KServe is a", [e2e-llm-inference-service] service_name="autoscale-hpa-deploy", [e2e-llm-inference-service] ), [e2e-llm-inference-service] marks=[ [e2e-llm-inference-service] pytest.mark.cluster_cpu, [e2e-llm-inference-service] pytest.mark.cluster_single_node, [e2e-llm-inference-service] pytest.mark.llmd_simulator, [e2e-llm-inference-service] ], [e2e-llm-inference-service] ), [e2e-llm-inference-service] ], [e2e-llm-inference-service] indirect=["test_case"], [e2e-llm-inference-service] ids=generate_test_id, [e2e-llm-inference-service] ) [e2e-llm-inference-service] @log_execution [e2e-llm-inference-service] def test_llm_autoscaling_hpa_deployment(test_case: TestCase): [e2e-llm-inference-service] """HPA + Deployment: VA and HPA exist; pods scale up under load.""" [e2e-llm-inference-service] inject_k8s_proxy() [e2e-llm-inference-service] kserve_client = _new_kserve_client() [e2e-llm-inference-service] service_name = test_case.llm_service.metadata.name [e2e-llm-inference-service] [e2e-llm-inference-service] try: [e2e-llm-inference-service] > _create_and_wait(kserve_client, test_case) [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:355: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] kserve_client = [e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-hpa'], prompt='KServe is a', ser... 
{'name': 'scaling-hpa-autoscale-hpa-deplo-347a3180'}]}, [e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m') [e2e-llm-inference-service] [e2e-llm-inference-service] def _create_and_wait(kserve_client, test_case): [e2e-llm-inference-service] """Create LLMISVC and wait for it to be ready.""" [e2e-llm-inference-service] create_llmisvc(kserve_client, test_case.llm_service) [e2e-llm-inference-service] > wait_for_llm_isvc_ready( [e2e-llm-inference-service] kserve_client, test_case.llm_service, test_case.wait_timeout [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:295: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] args = (, {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kin...-repl-38916baa'}, [e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-hpa-deplo-347a3180'}]}, [e2e-llm-inference-service] 'status': None}, 900) [e2e-llm-inference-service] kwargs = {}, func_name = 'wait_for_llm_isvc_ready' [e2e-llm-inference-service] timestamp_start = '2026-04-24T19:20:42.391240', start_time = 1777058442.3914948 [e2e-llm-inference-service] duration = 900.0863747596741, timestamp_end = '2026-04-24T19:35:42.477882' [e2e-llm-inference-service] [e2e-llm-inference-service] @functools.wraps(func) [e2e-llm-inference-service] def wrapper(*args, **kwargs): [e2e-llm-inference-service] func_name = func.__name__ [e2e-llm-inference-service] [e2e-llm-inference-service] timestamp_start = datetime.now().isoformat() [e2e-llm-inference-service] logger.info( [e2e-llm-inference-service] f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}" [e2e-llm-inference-service] ) [e2e-llm-inference-service] start_time = time.time() [e2e-llm-inference-service] [e2e-llm-inference-service] try: [e2e-llm-inference-service] > result = func(*args, **kwargs) [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/logging.py:40: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] kserve_client = [e2e-llm-inference-service] given = {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] ...tor-no-repl-38916baa'}, [e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-hpa-deplo-347a3180'}]}, [e2e-llm-inference-service] 'status': None} [e2e-llm-inference-service] timeout_seconds = 900 [e2e-llm-inference-service] [e2e-llm-inference-service] @log_execution [e2e-llm-inference-service] def wait_for_llm_isvc_ready( [e2e-llm-inference-service] kserve_client: KServeClient, [e2e-llm-inference-service] given: V1alpha1LLMInferenceService, [e2e-llm-inference-service] timeout_seconds: int = 900, [e2e-llm-inference-service] ) -> str: [e2e-llm-inference-service] def assert_llm_isvc_ready(): [e2e-llm-inference-service] out = get_llmisvc( [e2e-llm-inference-service] kserve_client, [e2e-llm-inference-service] given.metadata.name, [e2e-llm-inference-service] given.metadata.namespace, [e2e-llm-inference-service] given.api_version.split("/")[1], [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] if "status" not in out: [e2e-llm-inference-service] raise AssertionError("No status found in LLM inference service") 
[e2e-llm-inference-service] [e2e-llm-inference-service] status = out["status"] [e2e-llm-inference-service] if "conditions" not in status: [e2e-llm-inference-service] raise AssertionError("No conditions found in status") [e2e-llm-inference-service] [e2e-llm-inference-service] expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"} [e2e-llm-inference-service] got_true_conditions = set() [e2e-llm-inference-service] [e2e-llm-inference-service] conditions = status["conditions"] [e2e-llm-inference-service] [e2e-llm-inference-service] for condition in conditions: [e2e-llm-inference-service] if condition.get("status") == "True": [e2e-llm-inference-service] got_true_conditions.add(condition.get("type")) [e2e-llm-inference-service] [e2e-llm-inference-service] missing_conditions = expected_true_conditions - got_true_conditions [e2e-llm-inference-service] if missing_conditions: [e2e-llm-inference-service] raise AssertionError( [e2e-llm-inference-service] f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}" [e2e-llm-inference-service] ) [e2e-llm-inference-service] return True [e2e-llm-inference-service] [e2e-llm-inference-service] > return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0) [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:618: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] assertion_fn = .assert_llm_isvc_ready at 0x7f7cf101ee80> [e2e-llm-inference-service] timeout = 900, interval = 1.0 [e2e-llm-inference-service] [e2e-llm-inference-service] def wait_for( [e2e-llm-inference-service] assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1 [e2e-llm-inference-service] ) -> Any: [e2e-llm-inference-service] """Wait for the assertion to succeed within timeout.""" [e2e-llm-inference-service] deadline = time.time() + timeout [e2e-llm-inference-service] while True: [e2e-llm-inference-service] try: [e2e-llm-inference-service] > return assertion_fn() [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:628: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] def assert_llm_isvc_ready(): [e2e-llm-inference-service] out = get_llmisvc( [e2e-llm-inference-service] kserve_client, [e2e-llm-inference-service] given.metadata.name, [e2e-llm-inference-service] given.metadata.namespace, [e2e-llm-inference-service] given.api_version.split("/")[1], [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] if "status" not in out: [e2e-llm-inference-service] raise AssertionError("No status found in LLM inference service") [e2e-llm-inference-service] [e2e-llm-inference-service] status = out["status"] [e2e-llm-inference-service] if "conditions" not in status: [e2e-llm-inference-service] raise AssertionError("No conditions found in status") [e2e-llm-inference-service] [e2e-llm-inference-service] expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"} [e2e-llm-inference-service] got_true_conditions = set() [e2e-llm-inference-service] [e2e-llm-inference-service] conditions = status["conditions"] [e2e-llm-inference-service] [e2e-llm-inference-service] for condition in conditions: [e2e-llm-inference-service] if condition.get("status") == "True": 
[e2e-llm-inference-service] got_true_conditions.add(condition.get("type")) [e2e-llm-inference-service] [e2e-llm-inference-service] missing_conditions = expected_true_conditions - got_true_conditions [e2e-llm-inference-service] if missing_conditions: [e2e-llm-inference-service] > raise AssertionError( [e2e-llm-inference-service] f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}" [e2e-llm-inference-service] ) [e2e-llm-inference-service] E AssertionError: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:613: AssertionError [e2e-llm-inference-service] ------------------------------ Captured log setup ------------------------------ [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-autoscale-hpa-de-ec1dce8b in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-autoscale-hpa-de-ec1dce8b [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-autoscale-hpa-de-ec1dce8b [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-llmd-simulator-no-repl-38916baa in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-llmd-simulator-no-repl-38916baa [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-llmd-simulator-no-repl-38916baa [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scaling-hpa-autoscale-hpa-deplo-347a3180 in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scaling-hpa-autoscale-hpa-deplo-347a3180 [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scaling-hpa-autoscale-hpa-deplo-347a3180 [e2e-llm-inference-service] ------------------------------ Captured log call ------------------------------- [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [test_llm_autoscaling_hpa_deployment] [2026-04-24T19:20:42.340997] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-hpa'], prompt='KServe is a', service_name='autoscale-hpa-deploy', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, [e2e-llm-inference-service] 'managed_fields': None, [e2e-llm-inference-service] 'name': 'autoscale-hpa-deploy', [e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test', [e2e-llm-inference-service] 'owner_references': None, [e2e-llm-inference-service] 'resource_version': None, [e2e-llm-inference-service] 'self_link': None, [e2e-llm-inference-service] 'uid': None}, [e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-hpa-de-ec1dce8b'}, [e2e-llm-inference-service] {'name': 'workload-llmd-simulator-no-repl-38916baa'}, [e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-hpa-deplo-347a3180'}]}, [e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T19:20:42.354168] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, [e2e-llm-inference-service] 'managed_fields': None, [e2e-llm-inference-service] 'name': 'autoscale-hpa-deploy', [e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test', [e2e-llm-inference-service] 'owner_references': None, [e2e-llm-inference-service] 'resource_version': None, [e2e-llm-inference-service] 'self_link': None, [e2e-llm-inference-service] 'uid': None}, [e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-hpa-de-ec1dce8b'}, [e2e-llm-inference-service] {'name': 'workload-llmd-simulator-no-repl-38916baa'}, [e2e-llm-inference-service] {'name': 
'scaling-hpa-autoscale-hpa-deplo-347a3180'}]}, [e2e-llm-inference-service] 'status': None}), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T19:20:42.391152] end - ✅ in 0.037s [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T19:20:42.391240] start - args=(, {...same LLMInferenceService spec as dumped above...}, 900), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:20:42.391501] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:20:42.396674] end - ✅ in 0.005s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[... the one-second get_llmisvc poll and the "Waiting: No conditions found in status" message repeat through 2026-04-24T19:20:46 ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [... the same five-condition list shown in the AssertionError above (reason ScalingCRDNotFound, lastTransitionTime 2026-04-24T19:20:46Z); first reported at the 19:20:47 poll ...]
[... the identical poll cycle and "Waiting: Missing true conditions" message, with an unchanged condition list, repeat once per second until the 900 s wait times out at 2026-04-24T19:35:42 ...]
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:15.859919] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:15.867337] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:16.867817] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:16.875560] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:17.876022] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:17.884050] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:18.884385] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:18.891634] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:19.891876] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:19.899156] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:20.899461] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:20.914023] end - ✅ in 0.014s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:21.914357] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:21.922216] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:21:22.922610] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:22.930379] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:23.930651] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:23.941037] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:24.941508] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:24.949019] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:25.949537] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:25.957685] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:26.957947] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:26.965389] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:27.965862] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:27.974127] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:28.974612] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:28.982163] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:29.982513] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:29.989975] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:30.990527] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:30.998611] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:31.999092] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:32.008752] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:33.009204] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:33.017020] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:34.017359] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:34.025271] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:35.025779] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:35.033030] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:21:36.033630] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:36.041603] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:37.041909] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:37.049626] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:38.049912] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:38.057533] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:39.057789] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:39.065122] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:40.065461] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:40.072265] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:41.072621] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:41.080280] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:42.080625] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:42.088365] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:21:43.088672] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:21:43.096151] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
[... the same get_llmisvc poll (start/end, ~0.007s) and an identical "Waiting: Missing true conditions" record repeat once per second from 19:21:43 through 19:22:18, every cycle reporting the same ScalingCRDNotFound conditions with lastTransitionTime 2026-04-24T19:20:46Z; log truncated mid-record ...]
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:19.394354] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:19.401812] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:20.402123] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:20.409217] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:21.409680] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:21.417122] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:22.417699] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:22.424757] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:23.425058] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:23.432635] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:24.433151] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:24.441092] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:25.441497] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:25.450043] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:26.450473] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:26.457804] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:27.458087] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:27.467271] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
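What the harness is doing here is a one-second condition poll: fetch the LLMInferenceService, collect the condition types whose status is True, and retry until {'Ready', 'RouterReady', 'WorkloadsReady'} are all present. A minimal sketch of such a loop, assuming get_llmisvc returns the resource as a dict (the helper name comes from the log; the signature and defaults below are illustrative, not the test's actual code):

    import time

    REQUIRED = {"Ready", "RouterReady", "WorkloadsReady"}

    def wait_for_llmisvc_ready(get_llmisvc, name, namespace,
                               version="v1alpha1", timeout=600.0, interval=1.0):
        # Poll until every required condition reports status True, mirroring
        # the [get_llmisvc] start/end pairs and "Waiting" lines in the log.
        missing = set(REQUIRED)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            obj = get_llmisvc(name, namespace, version)
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = REQUIRED - true_types
            if not missing:
                return obj
            print(f"Waiting: Missing true conditions: {missing}, expected {REQUIRED}")
            time.sleep(interval)
        raise TimeoutError(f"conditions never became True: {missing}")

Because ScalingCRDNotFound is not a transient state, no amount of polling can succeed here; the loop simply runs out its timeout, which the unchanged lastTransitionTime (19:20:46) already suggests.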
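The repeated condition message is the root cause in full: the API server has no mapping for kind "VariantAutoscaling" in group/version llmd.ai/v1alpha1, meaning the llm-d VariantAutoscaling CRD was not installed on the test cluster before the autoscale-hpa-deploy case ran, so the controller cannot create the -kserve-va object and parks every workload condition on ScalingCRDNotFound. One way to confirm this from a debug session with the official kubernetes Python client; the plural CRD name variantautoscalings.llmd.ai is an assumption derived from the kind and group, not something the log confirms:

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()  # e.g. the kubeconfig written to /credentials by get-kubeconfig

    # Assumed plural name for the llm-d CRD; adjust if the actual CRD differs.
    CRD_NAME = "variantautoscalings.llmd.ai"

    try:
        crd = client.ApiextensionsV1Api().read_custom_resource_definition(CRD_NAME)
        print(f"{CRD_NAME} present, versions: {[v.name for v in crd.spec.versions]}")
    except ApiException as exc:
        if exc.status == 404:
            # Matches ScalingCRDNotFound: with no CRD installed, the controller's
            # lookup fails with 'no matches for kind "VariantAutoscaling" in
            # version "llmd.ai/v1alpha1"'.
            print(f"{CRD_NAME} not installed")
        else:
            raise

A 404 here would point the fix at cluster setup (installing the llm-d CRDs before running the llm-d-marked tests) rather than at the test itself.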
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:48.634707] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:48.642221] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info',
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:49.642600] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:49.651562] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:50.651862] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:50.658687] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:51.658983] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:51.666094] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:52.666375] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:52.674852] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:53.675185] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:53.683458] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:22:54.684067] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:54.692035] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:55.692371] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:55.699971] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:56.700260] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:56.707681] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:57.707972] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:57.715474] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:58.715762] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:58.722565] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:22:59.722887] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:22:59.730364] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:00.730650] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:00.738084] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:01.738495] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:01.746954] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:02.747474] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:02.754886] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:03.755395] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:03.763609] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:04.764068] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:04.778835] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:05.779193] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:05.787501] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:06.787871] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:06.799793] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:23:07.800088] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:07.807505] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:08.807798] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:08.815480] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:09.815760] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:09.822493] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:10.822797] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:10.829774] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:11.830100] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:11.837870] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:12.838281] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:12.846443] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:13.846771] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:13.854887] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:14.855353] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:14.862875] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:15.863347] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:15.871394] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:16.871668] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:16.880627] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:17.881056] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:17.888869] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:18.889210] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:18.896465] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
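Note: every False condition above carries reason ScalingCRDNotFound and the same underlying error: the API server has no kind "VariantAutoscaling" in group/version llmd.ai/v1alpha1, so the llmisvc controller cannot reconcile the VariantAutoscaling object for the main workload, and Ready/WorkloadsReady can never turn True. Below is a minimal sketch to confirm the CRD is absent, assuming kubeconfig access to the cluster under test; the plural resource name variantautoscalings.llmd.ai is inferred from the kind and group, not taken from the log:

    # check_va_crd.py: confirm whether the VariantAutoscaling CRD is registered.
    from kubernetes import client, config

    # Assumes KUBECONFIG points at the cluster under test,
    # e.g. the kubeconfig written by the get-kubeconfig step.
    config.load_kube_config()

    ext = client.ApiextensionsV1Api()
    installed = {crd.metadata.name
                 for crd in ext.list_custom_resource_definition().items}

    # "variantautoscalings.llmd.ai" is an assumed plural; adjust it if the
    # llm-d project names the resource differently.
    if "variantautoscalings.llmd.ai" in installed:
        print("VariantAutoscaling CRD present; ScalingCRDNotFound would be stale")
    else:
        print("VariantAutoscaling CRD missing; matches ScalingCRDNotFound above")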
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:19.896802] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:19.904370] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:23:20.904690] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:20.912131] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:21.912672] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:21.920475] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:22.920948] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:23.028366] end - ✅ in 0.107s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:24.028727] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:24.127138] end - ✅ in 0.098s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:25.127460] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:25.135011] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:26.135373] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:26.142871] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:27.143255] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:27.227547] end - ✅ in 0.084s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:28.227994] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:28.235150] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:29.235483] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:29.243186] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:30.243466] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:30.326893] end - ✅ in 0.083s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:31.327211] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:31.334410] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:32.334712] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:32.342060] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:33.342560] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:33.350180] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:23:34.350487] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:34.358218] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:35.358745] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:35.429571] end - ✅ in 0.071s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:36.429867] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:36.527626] end - ✅ in 0.097s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:37.527958] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:37.536215] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:38.536681] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:38.628339] end - ✅ in 0.091s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:39.628753] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:39.827353] end - ✅ in 0.198s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:40.827692] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:40.837895] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:41.838240] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:41.846251] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:42.846566] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:42.855716] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:43.856140] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:43.863188] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:44.863586] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:44.871099] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:45.871532] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:45.878597] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:46.878881] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:46.886153] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:23:47.886668] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:47.894496] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:48.894821] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:48.902841] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:49.903147] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:49.910883] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:50.911203] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:50.918797] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
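For readers unfamiliar with the harness: the repeated entries come from a condition-wait loop at test_llm_inference_service.py:632. A minimal sketch of that polling pattern, assuming a hypothetical get_llmisvc callable standing in for the harness helper of the same name (its real signature and return shape are only inferred from the logged output):

    import time

    EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

    def wait_for_llmisvc_ready(get_llmisvc, name, namespace, version="v1alpha1",
                               timeout_s=600.0, interval_s=1.0):
        """Poll the LLMInferenceService until every EXPECTED condition is True."""
        missing = set(EXPECTED)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            llmisvc = get_llmisvc(name, namespace, version)
            conditions = llmisvc.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return llmisvc  # all expected conditions are True
            # Mirrors the repeated "Waiting: Missing true conditions" log line.
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {EXPECTED}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f"conditions never became True: {missing}")

With the status shown above, missing never shrinks, so a loop of this shape spins at one-second intervals until its timeout expires, which is exactly the pattern the log records.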
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:51.919106] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:51.926522] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:52.926984] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:52.934758] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:53.935166] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:53.943144] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:54.943452] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:54.950497] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:55.950799] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:55.957750] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:56.958065] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:56.965560] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:57.965869] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:57.973543] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:58.973855] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:58.981821] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:23:59.982164] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:23:59.990192] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:24:00.990655] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:00.997849] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:01.998148] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:02.007458] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:03.007833] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:03.016144] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:04.016634] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:04.024127] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:05.024493] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:05.032387] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:06.032879] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:06.040807] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:07.041146] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:07.049096] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:08.049399] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:08.127163] end - ✅ in 0.078s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:09.127497] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:09.135040] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:10.135371] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:10.143003] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:11.143365] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:11.150963] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:12.151352] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:12.158742] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:13.159208] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:13.167461] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:24:14.167761] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:14.176874] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:15.177221] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:15.185394] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:16.185751] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:16.193215] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:17.193707] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:17.209355] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:18.209872] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:18.217753] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:19.218065] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:19.225070] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:20.225387] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:20.233073] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:21.233538] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:21.240973] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
[e2e-llm-inference-service] (the get_llmisvc poll and the identical "Waiting: Missing true conditions" status block above repeat once per second, unchanged except for the poll timestamps, from 2026-04-24T19:24:21 through at least 2026-04-24T19:24:58; lastTransitionTime stays 2026-04-24T19:20:46Z and every failing condition keeps reason ScalingCRDNotFound)
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:24:59.559134] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:24:59.566769] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:00.567057] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:00.574795] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:01.575091] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:01.582749] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:02.583046] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:02.590672] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:03.591046] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:03.599018] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:04.599341] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:04.608338] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:05.608727] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:05.616864] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:25:06.617349] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:06.625155] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:07.625472] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:07.633480] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:08.633810] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:08.641371] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:09.641665] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:09.648713] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:10.648983] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:10.656282] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:11.656660] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:11.664532] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:12.664821] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:12.672628] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:13.673179] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:13.680554] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:14.680848] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:14.687841] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:15.688125] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:15.695635] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:16.696004] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:16.703501] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:17.703956] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:17.711186] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:18.711539] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:18.719030] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:25:19.719316] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:19.726656] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:20.726951] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:20.734137] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:21.734433] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:21.742450] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:22.742847] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:22.761909] end - ✅ in 0.019s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:23.762374] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:23.770177] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:24.770780] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:24.780210] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:25.780512] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:25.788220] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:25:26.788678] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:25:26.795508] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
[... the get_llmisvc poll and its "Waiting: Missing true conditions" entry repeat once per second, unchanged apart from timestamps, from 19:25:26 through 19:26:01; every iteration reports the same five conditions with reason ScalingCRDNotFound ...]
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:01.076150] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:01.083635] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:02.084145] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:02.092010] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:03.092368] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:03.100824] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:04.101271] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:04.107990] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
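What the repetition shows is a poll-and-compare loop: once per second the test fetches the LLMInferenceService, collects the condition types whose status is 'True', and diffs them against the expected set until nothing is missing or the wait times out. The sketch below is a hedged reconstruction of that pattern for readers of this log, not the actual helper in test_llm_inference_service.py; the function and parameter names are illustrative.

    import time

    EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

    def wait_for_true_conditions(get_conditions, timeout_s=600, interval_s=1.0):
        # Illustrative reconstruction, not the real test helper.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            conditions = get_conditions()  # list of dicts like the "got [...]" above
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return True
            print(f"Waiting: Missing true conditions: {missing}, expected {EXPECTED}")
            time.sleep(interval_s)
        return False

In this excerpt the loop can never converge: MainWorkloadReady, Ready, and WorkloadsReady are pinned to 'False' by the same reconcile error, RouterReady stays 'Unknown', and only PresetsCombined is ever 'True', so the wait runs until its timeout and the task fails.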
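Every iteration carries the same root cause, reason ScalingCRDNotFound: the controller cannot resolve kind "VariantAutoscaling" in version "llmd.ai/v1alpha1", meaning the VariantAutoscaling CRD is not installed on the test cluster, so the autoscaling reconcile can never succeed. A minimal presence check is sketched below, assuming the kubernetes Python client is installed and KUBECONFIG points at the test cluster; the plural resource name is the conventional guess for this kind and group and is not confirmed anywhere in the log.

    from kubernetes import client, config

    config.load_kube_config()  # honours $KUBECONFIG

    # Assumed CRD name: lowercase plural of the kind plus the group from the error.
    TARGET = "variantautoscalings.llmd.ai"

    crds = client.ApiextensionsV1Api().list_custom_resource_definition()
    names = {crd.metadata.name for crd in crds.items}
    print(f"{TARGET} installed: {TARGET in names}")

If the check prints False, installing the missing CRD (or disabling the VariantAutoscaling integration for this test profile) should let the conditions flip to 'True' before the wait times out.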
[... raw log resumes ...] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:31.628039] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:31.635941] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling:
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:32.636397] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:32.644107] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:33.644433] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:33.654250] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:34.654655] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:34.662427] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:35.662740] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:35.670432] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:36.670749] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:36.677979] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:37.678434] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:37.689053] end - ✅ in 0.010s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:38.689560] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:38.698423] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:26:39.698793] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:39.727780] end - ✅ in 0.029s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:40.728085] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:40.736231] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:41.736760] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:41.744826] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:42.745139] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:42.752976] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:43.753358] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:43.761191] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:44.761546] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:44.769385] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:45.769742] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:45.777973] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:46.778347] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:46.785703] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:47.786022] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:47.793820] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:48.794194] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:48.801970] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:49.802458] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:49.810509] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:50.810872] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:50.818873] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:51.819263] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:51.827631] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:26:52.827884] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:52.835150] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:53.835524] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:53.843495] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:54.843884] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:54.851886] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:55.852202] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:55.859837] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:56.860126] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:56.867847] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:57.868190] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:57.876134] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:58.876623] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:58.884018] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:59.884354] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:59.927593] end - ✅ in 0.043s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, ...] (tail of the previous poll's condition dump; the full, unchanging condition set is laid out once below)
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:00.928082] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:00.936407] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got:
    [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
     {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
     {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': <same ScalingCRDNotFound message as MainWorkloadReady>, 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
     {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'},
     {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': <same ScalingCRDNotFound message as MainWorkloadReady>, 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] (the get_llmisvc start/end pair and this identical "Waiting: Missing true conditions" dump then repeat once per second, from 2026-04-24T19:27:01 through 2026-04-24T19:27:36: 36 more polls, each returning in 0.007-0.019s, with the condition set unchanged on every iteration)
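Every poll above fails for the same root cause: the controller cannot resolve kind VariantAutoscaling in API group/version llmd.ai/v1alpha1, so MainWorkloadReady, WorkloadsReady, and Ready stay False with reason ScalingCRDNotFound and RouterReady never leaves Unknown. A quick way to confirm the missing CRD on the test cluster is a check along these lines (a sketch only; the plural resource name variantautoscalings.llmd.ai is an assumption based on the usual <plural>.<group> CRD naming):

    # Is any resource served under the llmd.ai API group at all?
    kubectl api-resources --api-group=llmd.ai

    # Look for the CRD itself (name assumed to follow <plural>.<group>)
    kubectl get crd variantautoscalings.llmd.ai

    # Reproduce the controller's lookup for the VA object the test expects
    kubectl get variantautoscalings.v1alpha1.llmd.ai autoscale-hpa-deploy-kserve-va -n kserve-ci-e2e-test

If these come back empty or NotFound, the component that ships the VariantAutoscaling CRD (which the llmd.ai group suggests is part of the llm-d stack) was never installed on this cluster, and the wait loop below can only run out its timeout.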
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:36.239972] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:36.247538] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:37.247888] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:37.256273] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:38.256826] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:38.264425] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:39.264755] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:39.272118] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:40.272624] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:40.279983] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:41.280268] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:41.288103] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:42.288362] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:42.302145] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:43.302688] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:43.311003] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:44.311341] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:44.318951] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:27:45.319389] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:45.327206] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:46.327571] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:46.335800] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:47.336247] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:47.345071] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:48.345539] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:48.352648] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:49.352951] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:49.360066] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:50.360620] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:50.368651] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:51.368944] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:51.376640] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:52.376980] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:52.385165] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:53.385675] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:53.394716] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:54.395256] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:54.403283] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:55.403809] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:55.412007] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:56.412271] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:56.419385] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:57.419706] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:57.427013] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:27:58.427292] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:58.434176] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:59.434463] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:59.441694] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:00.441994] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:00.449501] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:01.449857] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:01.457604] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:02.458078] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:02.465871] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:03.466229] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:03.475117] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:04.475662] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:04.482675] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [
    {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
    {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
    {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'},
    {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
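
Every condition stuck at False above reports the same reason, ScalingCRDNotFound: the API server answers "no matches for kind" because no CRD serving VariantAutoscaling in llmd.ai/v1alpha1 is installed on the ephemeral test cluster, so the controller can never create or read autoscale-hpa-deploy-kserve-va, and Ready, WorkloadsReady and RouterReady can never go True. A minimal check from a workstation with the cluster kubeconfig is sketched below; the plural CRD name variantautoscalings.llmd.ai is an assumption derived from the kind and group, it is not printed anywhere in this log.

    # List every resource the cluster serves in the llmd.ai API group;
    # empty output means the group is not registered at all
    kubectl api-resources --api-group=llmd.ai

    # Probe the CRD directly (plural name assumed from kind "VariantAutoscaling" + group llmd.ai)
    kubectl get crd variantautoscalings.llmd.ai -o jsonpath='{.spec.versions[*].name}'

If both come back empty or NotFound, the fix belongs in cluster setup (whichever deploy step is expected to install the llm-d autoscaling CRDs before the suite runs), not in the test itself.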
[e2e-llm-inference-service] (the get_llmisvc start/end pair and the identical "Waiting: Missing true conditions" record above repeat once per second, unchanged except for the poll timestamp, from 19:28:05 through 19:28:41; every condition keeps lastTransitionTime 2026-04-24T19:20:46Z and reason ScalingCRDNotFound throughout)
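
While the loop spins, the conditions the test polls at test_llm_inference_service.py:632 can be read straight from the cluster. A sketch, assuming the lowercase kind llminferenceservice resolves through kubectl discovery once the KServe CRDs are installed:

    # Print type=status for each condition on the stuck resource
    kubectl get llminferenceservice autoscale-hpa-deploy -n kserve-ci-e2e-test \
      -o jsonpath='{range .status.conditions[*]}{.type}{"="}{.status}{"\n"}{end}'

Until the missing CRD appears this keeps printing MainWorkloadReady=False, PresetsCombined=True, Ready=False, RouterReady=Unknown and WorkloadsReady=False, exactly the list in the record above.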
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:42.796562] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:42.804151] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling:
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:43.804582] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:43.812488] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:44.812763] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:44.820171] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:45.820557] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:45.832160] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:46.832734] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:46.841144] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:47.841623] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:47.849778] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:48.850072] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:48.857858] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:49.858142] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:49.866147] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:28:50.866514] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:50.876062] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:51.876553] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:51.884007] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:52.884598] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:52.892286] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:53.892624] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:53.900193] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:54.900505] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:54.907940] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:55.908260] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:55.917807] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:56.918115] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:56.925417] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:57.925760] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:57.935388] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:58.935717] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:58.946459] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:59.946820] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:59.954862] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:00.955213] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:00.962926] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:01.963271] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:01.970255] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:02.970789] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:02.978980] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:29:03.979353] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:03.987883] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:04.988166] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:04.996206] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:05.996679] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:06.004491] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:07.004982] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:07.013140] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:08.013454] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:08.020911] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:09.021231] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:09.029395] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]

[e2e-llm-inference-service] [... output truncated for readability: the get_llmisvc poll (logging.py:34/43) and the identical "Waiting: Missing true conditions" dump from test_llm_inference_service.py:632 repeat once per second from 2026-04-24T19:29:10 through 2026-04-24T19:29:45. Each poll returns in 0.007-0.057s; the "Missing" set always equals the "expected" set {'Ready', 'RouterReady', 'WorkloadsReady'} (the two differ only in Python set ordering), i.e. none of the three gating conditions ever turns True. The five reported conditions are identical on every iteration, all with lastTransitionTime 2026-04-24T19:20:46Z:

    MainWorkloadReady  False    reason=ScalingCRDNotFound (severity Info)
    PresetsCombined    True     (severity Info)
    Ready              False    reason=ScalingCRDNotFound
    RouterReady        Unknown
    WorkloadsReady     False    reason=ScalingCRDNotFound

Every False condition carries the same message: failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1". The log resumes below at the 2026-04-24T19:29:46 poll. ...]
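Note: the reason ScalingCRDNotFound plus 'no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"' is the standard Kubernetes RESTMapper error for a kind whose CustomResourceDefinition is not installed: the llmisvc controller wants to reconcile a VariantAutoscaling object for the HPA autoscale case, but the cluster serves nothing under group llmd.ai at v1alpha1, so reconciliation aborts at the scaling step on every pass and the wait loop can only spin until its timeout. A minimal diagnostic sketch, assuming plain kubectl access to the ephemeral cluster; the plural CRD name variantautoscalings.llmd.ai is inferred from the kind and group in the error and is hypothetical, not confirmed by this log:

    # Is anything served under the llmd.ai API group at all?
    kubectl api-resources --api-group=llmd.ai

    # Is the CRD registered? (plural name inferred from kind VariantAutoscaling)
    kubectl get crd variantautoscalings.llmd.ai

    # The exact object the controller keeps failing to get:
    kubectl -n kserve-ci-e2e-test get variantautoscalings.llmd.ai autoscale-hpa-deploy-kserve-va

If the CRD is genuinely absent, the fix belongs in the e2e environment setup (install the VariantAutoscaling CRD before running the 'llminferenceservice and cluster_cpu' suite in llm-d mode) rather than in the controller or the test.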
[2026-04-24T19:29:43.368954] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:43.376251] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:44.376642] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:44.384126] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:45.384482] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:45.392178] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:46.392693] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:46.400629] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:47.401001] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:47.408982] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:48.409345] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:48.416905] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:49.417257] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:49.424820] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:50.425179] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:50.434024] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:51.434390] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:51.442204] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:52.442542] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:52.450615] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:53.450919] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:53.458368] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:54.458658] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:54.466532] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:55.466810] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:55.475473] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:29:56.475772] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:56.483731] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:57.484037] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:57.491405] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:58.491708] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:58.499658] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:59.499954] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:59.507535] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:00.507827] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:00.515474] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:01.515943] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:01.523885] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:02.524210] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:02.532074] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:03.532394] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:03.540257] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:04.540575] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:04.548393] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:05.548686] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:05.556528] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:06.556949] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:06.564833] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:07.565358] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:07.576076] end - ✅ in 0.010s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:08.576544] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:08.587194] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:30:09.587631] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:09.599509] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:10.599797] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:10.607376] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:11.607734] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:11.615403] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:12.615813] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:12.623475] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:13.623876] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:13.631039] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:14.631386] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:14.638838] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:15.639358] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:15.646801] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:16.647087] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:16.654717] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:17.655108] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:17.662425] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:18.662908] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:18.670221] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:19.670503] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:19.677649] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:20.677981] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:20.685226] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:21.685516] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:21.693135] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:30:22.693521] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:22.700536] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:23.700838] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:23.708321] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:24.708659] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:24.717255] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:25.717758] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:25.725648] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:26.725951] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:26.733771] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:27.734058] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:27.741964] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:28.742333] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:28.750392] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:29.750873] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:29.758862] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:30.759169] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:30.766172] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:31.766508] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:31.774226] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:32.774576] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:32.781831] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:33.782269] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:33.791036] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:34.791397] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:34.800344] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:30:35.800645] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:35.808195] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:36.808539] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:36.815958] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:37.816418] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:37.824108] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:38.824612] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:38.831847] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:39.832217] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:39.842049] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:40.842562] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:40.850669] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:41.850973] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:41.858355] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:42.858816] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:42.867267] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:43.867622] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:43.875489] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:44.875823] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:44.883475] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
[... the get_llmisvc start/end pair and an identical "Waiting: Missing true conditions" record repeat once per second from 19:30:43 through 19:31:19; the condition set never changes and every lastTransitionTime stays pinned at 2026-04-24T19:20:46Z ...]
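The wait at test_llm_inference_service.py:632 is evidently a poll-and-compare loop: fetch the LLMInferenceService, collect the condition types whose status is 'True', and retry every second until {'Ready', 'RouterReady', 'WorkloadsReady'} is covered or the test's timeout fires. A rough shell equivalent for re-checking one condition by hand (the kubectl resource name llminferenceservice and the jsonpath expression are assumptions inferred from the log, not taken from the test code):

    # Poll the Ready condition of the stuck resource once per second.
    until kubectl -n kserve-ci-e2e-test get llminferenceservice autoscale-hpa-deploy \
          -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}' | grep -qx True; do
        echo "Ready still not True; retrying..."
        sleep 1
    done

Because ScalingCRDNotFound persists until the CRD is installed, a loop like this spins forever, just as the test's own wait below only ends when its timeout expires.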
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:19.182940] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:19.192346] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:20.192694] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:20.201118] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:21.201638] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:21.210506] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:22.210761] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:22.218761] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:23.219110] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:23.226527] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:24.226891] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:24.234787] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:25.235074] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:25.242986] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:26.243355] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:26.251217] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:27.251570] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:27.259046] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:31:28.259361] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:28.266769] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:29.267153] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:29.276944] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:30.277271] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:30.283679] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:31.283973] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:31.291983] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:32.292631] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:32.300717] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:33.301135] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:33.309377] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:34.309681] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:34.317985] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:35.318544] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:35.326790] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:36.327133] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:36.336598] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:37.336882] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:37.346106] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:38.346644] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:38.355783] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:39.356276] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:39.367201] end - ✅ in 0.011s [e2e-llm-inference-service] INFO 
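Every poll above fails for the same reason: the controller cannot resolve the VariantAutoscaling kind in llmd.ai/v1alpha1, i.e. the llm-d autoscaling CRD is not installed (or not served at that version) on the test cluster, so the LLMInferenceService can never reach Ready. A minimal diagnostic sketch to confirm that from outside the test, assuming kubeconfig access and the standard `kubernetes` Python client; the CRD name `variantautoscalings.llmd.ai` is a conventional `<plural>.<group>` guess built from the kind and group in the error message, not something the log confirms:

```python
# Diagnostic sketch: is the CRD behind 'no matches for kind "VariantAutoscaling"'
# actually installed? Kind and group come from the error message above; the
# plural form "variantautoscalings" is an assumed (conventional) pluralization.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # point KUBECONFIG at the cluster's kubeconfig file

CRD_NAME = "variantautoscalings.llmd.ai"  # assumed <plural>.<group> for the missing kind

try:
    crd = client.ApiextensionsV1Api().read_custom_resource_definition(CRD_NAME)
    served = [v.name for v in crd.spec.versions if v.served]
    # The reconciler asks for llmd.ai/v1alpha1, so "v1alpha1" must appear here.
    print(f"CRD installed; served versions: {served}")
except ApiException as e:
    if e.status == 404:
        print("CRD not installed: matches the ScalingCRDNotFound condition above")
    else:
        raise
```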
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:53.492501] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:53.502957] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:31:54.503244] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:54.516522] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:55.516984] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:55.528454] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:56.528946] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:56.537961] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:57.538280] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:57.546946] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:58.547277] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:58.554863] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:59.555260] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:59.562923] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:00.563209] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:00.570962] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:01.571458] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:01.579091] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
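The check behind each "Waiting: Missing true conditions" line is simple set arithmetic: collect the condition types whose status is exactly 'True' and subtract them from the expected set. A minimal sketch of that computation, with illustrative names rather than the actual helpers in test_llm_inference_service.py:

    # Minimal sketch of the "Missing true conditions" check; names are
    # illustrative, not the real test helpers.
    EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

    def missing_true_conditions(conditions):
        # Only status == "True" counts as satisfied; "False" and "Unknown" do not.
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        return EXPECTED - true_types

    # With the status above, PresetsCombined is the only True condition (and it
    # is not in the expected set), so every poll reports all three as missing.
    status_conditions = [
        {"type": "MainWorkloadReady", "status": "False", "reason": "ScalingCRDNotFound"},
        {"type": "PresetsCombined", "status": "True"},
        {"type": "Ready", "status": "False", "reason": "ScalingCRDNotFound"},
        {"type": "RouterReady", "status": "Unknown"},
        {"type": "WorkloadsReady", "status": "False", "reason": "ScalingCRDNotFound"},
    ]
    assert missing_true_conditions(status_conditions) == EXPECTED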
[... the get_llmisvc poll and its "Waiting: Missing true conditions" report repeat once per second with identical condition output from 19:31:48.438976 through 19:32:24.790684 (37 consecutive polls; only the timestamps and the sub-20 ms call durations change) ...]
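Every one of those polls fails for the same single cause: reason ScalingCRDNotFound, i.e. the API server has no mapping for kind "VariantAutoscaling" in llmd.ai/v1alpha1. The llm-d VariantAutoscaling CRD is not installed on the ephemeral test cluster, so the controller can never reconcile autoscale-hpa-deploy-kserve-va; MainWorkloadReady, Ready, and WorkloadsReady stay False and RouterReady stays Unknown no matter how long the test waits. A quick presence check one could run against the cluster, as a sketch (it assumes the standard kubernetes Python client, and the plural CRD name variantautoscalings.llmd.ai is inferred from the kind and group in the error message rather than confirmed from the llm-d manifests):

    # Sketch: does the VariantAutoscaling CRD exist on the test cluster?
    # Assumes the standard kubernetes Python client and a kubeconfig for the
    # ephemeral cluster; the CRD name is inferred from the error message.
    from kubernetes import client, config

    config.load_kube_config()  # point this at the test cluster's kubeconfig
    crds = client.ApiextensionsV1Api().list_custom_resource_definition()
    names = {crd.metadata.name for crd in crds.items}
    print("variantautoscalings.llmd.ai installed:",
          "variantautoscalings.llmd.ai" in names)

Until that CRD (or the llm-d component that ships it) is applied, the status cannot change, and the wait keeps streaming the same report until it times out; the last poll captured in this excerpt follows.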
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:25.791006] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:25.799793] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling:
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:26.800220] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:26.808851] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:27.809368] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:27.819692] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:28.820134] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:28.830118] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:29.830513] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:29.838764] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:30.839083] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:30.847925] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:31.848380] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:31.857228] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:32.857616] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:32.865759] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:32:33.866063] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:33.874829] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:34.875136] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:34.883040] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:35.883379] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:35.891510] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:36.891901] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:36.899540] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:37.899985] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:37.908455] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:38.908931] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:38.917237] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:39.917603] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:39.926090] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:40.926373] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:40.934241] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:41.934567] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:41.942431] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:42.942709] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:42.949822] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:43.950167] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:43.957890] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:44.958232] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:44.965809] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:45.966166] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:45.978390] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:32:46.978779] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:46.986622] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:47.986991] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:47.997092] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:48.997624] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:49.006690] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:50.006971] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:50.014996] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:51.015364] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:51.022942] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:52.023251] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:52.038653] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:53.039062] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:53.046942] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... the get_llmisvc poll (start/end, ~0.007-0.008s each) and the identical 'Waiting: Missing true conditions' status shown above repeated once per second, timestamps 2026-04-24T19:32:54 through 2026-04-24T19:33:28 (35 iterations); only the timestamps change ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:29.333065] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:29.340644] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:30.340931] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:30.348261] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:31.348722] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:31.356585] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:32.356917] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:32.365284] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:33.365837] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:33.373811] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:34.374077] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:34.381415] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:35.381887] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:35.390371] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:36.390797] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:36.403075] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:37.403350] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:37.410953] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:38.411230] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:38.419162] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:33:39.419468] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:39.427150] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:40.427486] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:40.435151] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:41.435536] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:41.443102] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:42.443576] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:42.451664] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:43.452009] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:43.459287] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:44.459722] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:44.467653] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:45.467925] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:45.475885] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:46.476205] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:46.483784] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:47.484082] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:47.491584] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:48.491960] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:48.499870] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:49.500159] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:49.507863] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:50.508187] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:50.515925] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:51.516503] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:51.526106] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:33:52.526387] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:52.534180] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:53.534489] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:53.541979] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:54.542415] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:54.550715] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:55.551054] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:55.559003] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:56.559517] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:56.567467] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:57.567764] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:57.575778] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:58.576216] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:58.583134] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:59.583662] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:59.591333] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:00.591621] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:00.599472] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:01.599860] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:01.610524] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:02.610825] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:02.617756] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:03.618049] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:03.626097] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:04.626405] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:04.634539] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:34:05.635000] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:05.642644] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:06.643102] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:06.651072] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:07.651398] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:07.663263] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:08.663549] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:08.674350] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:09.674628] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:09.682104] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:10.682574] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:10.691236] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:11.691674] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:11.699477] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:12.699772] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:12.707269] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:13.707618] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:13.717646] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:14.717959] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:14.726393] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:15.726846] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:15.734812] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:16.735159] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:16.744152] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:17.744615] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:17.752569] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
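Every failing condition above carries the same reason, ScalingCRDNotFound: the controller tries to reconcile a VariantAutoscaling object (group llmd.ai, version v1alpha1) for the main workload, and the API server has no such kind registered. "no matches for kind ... in version ..." is the standard Kubernetes RESTMapper error when the corresponding CRD is absent, so this is an environment problem (the llm-d autoscaling CRD missing from the test cluster) rather than a flake in the test itself. A minimal diagnostic sketch, not part of the CI scripts, assuming the standard `kubernetes` Python client and the kubeconfig retrieved earlier in the run:

```python
# Hypothetical check (not in the repo): is the VariantAutoscaling CRD registered?
from kubernetes import client, config

config.load_kube_config()  # e.g. point KUBECONFIG at the file written by get-kubeconfig

crds = client.ApiextensionsV1Api().list_custom_resource_definition()
matches = [
    crd for crd in crds.items
    if crd.spec.group == "llmd.ai" and crd.spec.names.kind == "VariantAutoscaling"
]
if not matches:
    # This is the state the controller reports: no RESTMapper entry for the kind,
    # so every reconcile of the main workload fails with ScalingCRDNotFound.
    print('variantautoscalings.llmd.ai is absent -> "no matches for kind"')
else:
    served = [v.name for v in matches[0].spec.versions if v.served]
    print(f"CRD present; served versions: {served}")  # the controller wants v1alpha1
```

Because the conditions can only flip once the CRD exists and the controller re-reconciles, the wait below cannot succeed on its own; the likely fix is to have the deploy step install the llm-d VariantAutoscaling CRD (or disable the autoscaling path for this test) before the e2e suite starts.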
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:30.872108] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:30.879922] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [ ...same five conditions as above, unchanged... ]
[2026-04-24T19:34:31.880221] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:31.888136] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:32.888499] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:32.896195] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:33.896691] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:33.904268] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:34.904619] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:34.912019] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:35.912393] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:35.920241] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:36.920567] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:36.928820] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:37.929285] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:37.937110] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:38.937410] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:38.945880] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:39.946350] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:39.953842] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:40.954134] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:40.961864] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:41.962182] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:41.973371] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:42.973863] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:42.982983] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:43.983359] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:43.990794] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:34:44.991074] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:44.998886] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:45.999155] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:46.007105] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:47.007369] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:47.029903] end - ✅ in 0.022s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:48.030201] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:48.037704] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:49.038187] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:49.045466] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:50.045914] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:50.052792] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:51.053076] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:51.060885] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:52.061369] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:52.069562] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:53.069860] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:53.077145] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:54.077661] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:54.086935] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:55.087263] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:55.094900] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:56.095437] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:56.102882] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:57.103160] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:57.110354] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:34:58.110827] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:58.118378] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:59.118858] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:59.126658] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
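Every iteration fails for the same underlying reason: the llmisvc controller cannot resolve the VariantAutoscaling kind in llmd.ai/v1alpha1, which means the llm-d autoscaler CRD was never installed (or never established) on the test cluster, so the HPA-style scaling path can never become Ready no matter how long the test waits. Below is a minimal diagnostic sketch, assuming the kubernetes Python client and the conventional plural CRD name variantautoscalings.llmd.ai (hypothetical; not confirmed by this log), that reproduces the lookup the controller is failing:

```python
# Sketch only: check whether the CRD behind reason=ScalingCRDNotFound exists.
# Assumes the CRD is named "variantautoscalings.llmd.ai" (hypothetical plural
# form) and that KUBECONFIG points at the e2e cluster this job provisioned.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
ext = client.ApiextensionsV1Api()

try:
    crd = ext.read_custom_resource_definition("variantautoscalings.llmd.ai")
except ApiException as e:
    if e.status == 404:
        # Matches the controller's error: no matches for kind
        # "VariantAutoscaling" in version "llmd.ai/v1alpha1"
        print("CRD missing -> ScalingCRDNotFound is expected")
    else:
        raise
else:
    served = [v.name for v in crd.spec.versions if v.served]
    print("CRD installed; served versions:", served)
    if "v1alpha1" not in served:
        print("v1alpha1 not served -> same 'no matches' failure")
```

If the CRD turns out to be installed, the controller's cached discovery data may simply predate the install; restarting the llmisvc controller pod is a common way to force re-discovery. Either way, the conditions below never change, so the wait loop at test_llm_inference_service.py:632 can only run out its timeout.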
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:01.134231] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:01.141706] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:02.142155] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:02.150182] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:03.150517] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:03.157898] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:04.158250] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:04.166122] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:05.166481] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:05.175543] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:06.175881] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:06.182958] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:07.183420] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:07.191091] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:08.191571] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:08.199134] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:09.199602] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:09.207555] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:10.207834] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:10.215715] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:35:11.216015] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:11.223644] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:12.223955] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:12.231348] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:13.231809] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:13.238931] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:14.239225] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:14.247760] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:15.248124] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:15.255839] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:16.256253] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:16.264036] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:17.264368] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:17.272106] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:18.272396] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:18.279652] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 
'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:19.280100] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:19.291017] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:20.291375] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:20.299721] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:21.300099] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:21.307192] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:22.307610] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:22.315386] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:23.315727] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:23.323259] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:35:24.323720] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:24.331745] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:25.332015] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:25.339460] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:26.339770] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:26.347095] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:27.347423] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:27.354934] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:28.355203] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:28.362935] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:29.363216] start - args=(, 'autoscale-hpa-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:29.371826] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:20:46Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... the get_llmisvc poll (start/end, ✅ in ~0.007s) and the identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" message repeat once per second from 2026-04-24T19:35:30 through 19:35:42; the five conditions above never change ...]
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T19:35:42.477882] end - ❌ 900.086s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [... the same five conditions as above: MainWorkloadReady/Ready/WorkloadsReady 'False' (reason 'ScalingCRDNotFound'), RouterReady 'Unknown', PresetsCombined 'True' ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T19:35:42.478262] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': None,
[e2e-llm-inference-service] 'creation_timestamp': None,
[e2e-llm-inference-service] 'deletion_grace_period_seconds': None,
[e2e-llm-inference-service] 'deletion_timestamp': None,
[e2e-llm-inference-service] 'finalizers': None,
[e2e-llm-inference-service] 'generate_name': None,
[e2e-llm-inference-service] 'generation': None,
[e2e-llm-inference-service] 'labels': None,
[e2e-llm-inference-service] 'managed_fields': None,
[e2e-llm-inference-service] 'name': 'autoscale-hpa-deploy',
[e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service] 'owner_references': None,
[e2e-llm-inference-service] 'resource_version': None,
[e2e-llm-inference-service] 'self_link': None,
[e2e-llm-inference-service] 'uid': None},
[e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-hpa-de-ec1dce8b'},
[e2e-llm-inference-service] {'name': 'workload-llmd-simulator-no-repl-38916baa'},
[e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-hpa-deplo-347a3180'}]},
[e2e-llm-inference-service] 'status': None}), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [delete_llmisvc] [2026-04-24T19:35:42.498779] end - ✅ in 0.020s
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_autoscaling_hpa_deployment] [2026-04-24T19:35:42.498971] end - ❌ 900.157s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [... the same five conditions as above ...]
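Every not-ready condition in the failure above carries the same reason, 'ScalingCRDNotFound': the controller cannot resolve kind VariantAutoscaling in API group llmd.ai/v1alpha1, so the HPA scaling preset referenced by this LLMInferenceService (baseRef scaling-hpa-autoscale-hpa-deplo-347a3180) can never reconcile and Ready never turns True. A minimal pre-flight check for the missing CRD is sketched below; the plural CRD name variantautoscalings.llmd.ai follows the usual Kubernetes naming convention and, like the kubeconfig setup, is an assumption rather than something taken from this log.

    # Sketch: verify the VariantAutoscaling CRD is installed before running the
    # autoscaling e2e tests. Assumptions (not from this log): conventional plural
    # CRD name "variantautoscalings.llmd.ai"; KUBECONFIG points at the test cluster.
    from kubernetes import client, config

    config.load_kube_config()
    crds = client.ApiextensionsV1Api().list_custom_resource_definition()
    installed = {crd.metadata.name for crd in crds.items}
    if "variantautoscalings.llmd.ai" not in installed:
        raise SystemExit("VariantAutoscaling CRD missing; expect ScalingCRDNotFound")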
[e2e-llm-inference-service] _ test_llm_inference_service[router-with-refs-pd-scheduler-managed-workload-pd-cpu-model-fb-opt-125m] _
[e2e-llm-inference-service] [gw1] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-llm-inference-service]
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-with-refs-pd', 'scheduler-managed', 'workload-pd-cpu', 'model-fb-opt-125m'], prompt='You a... {'name': 'model-fb-opt-125m-router-with-r-c22ea8a0'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service] @pytest.mark.llminferenceservice
[e2e-llm-inference-service] @pytest.mark.asyncio(loop_scope="session")
[e2e-llm-inference-service] @pytest.mark.parametrize(
[e2e-llm-inference-service]     "test_case",
[e2e-llm-inference-service]     [
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-with-gateway-ref",
[e2e-llm-inference-service]                     "router-with-managed-route",
[e2e-llm-inference-service]                     "model-fb-opt-125m",
[e2e-llm-inference-service]                     "workload-llmd-simulator",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 endpoint="/v1/completions",
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 payload_formatter=completions_payload,
[e2e-llm-inference-service]                 response_assertion=create_response_assertion(with_field="choices"),
[e2e-llm-inference-service]                 before_test=[
[e2e-llm-inference-service]                     lambda: create_router_resources(
[e2e-llm-inference-service]                         gateways=[ROUTER_GATEWAYS[0]],
[e2e-llm-inference-service]                     )
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[
[e2e-llm-inference-service]                 pytest.mark.cluster_cpu,
[e2e-llm-inference-service]                 pytest.mark.cluster_single_node,
[e2e-llm-inference-service]                 pytest.mark.llmd_simulator,
[e2e-llm-inference-service]                 pytest.mark.custom_gateway,
[e2e-llm-inference-service]             ],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-managed",
[e2e-llm-inference-service]                     "workload-single-cpu",
[e2e-llm-inference-service]                     "model-fb-opt-125m",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 payload_formatter=completions_payload,
[e2e-llm-inference-service]                 response_assertion=assert_200_with_choices,
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-custom-route-timeout",
[e2e-llm-inference-service]                     "scheduler-managed",
[e2e-llm-inference-service]                     "workload-single-cpu",
[e2e-llm-inference-service]                     "model-fb-opt-125m",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 service_name="custom-route-timeout-test",
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-with-refs",
[e2e-llm-inference-service]                     "scheduler-managed",
[e2e-llm-inference-service]                     "workload-single-cpu",
[e2e-llm-inference-service]                     "model-fb-opt-125m",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 service_name="router-with-refs-test",
[e2e-llm-inference-service]                 before_test=[
[e2e-llm-inference-service]                     lambda: create_router_resources(
[e2e-llm-inference-service]                         gateways=[ROUTER_GATEWAYS[0]],
[e2e-llm-inference-service]                         routes=[ROUTER_ROUTES[0], ROUTER_ROUTES[1]],
[e2e-llm-inference-service]                     )
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[
[e2e-llm-inference-service]                 pytest.mark.cluster_cpu,
[e2e-llm-inference-service]                 pytest.mark.cluster_single_node,
[e2e-llm-inference-service]                 pytest.mark.custom_gateway,
[e2e-llm-inference-service]             ],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=["router-managed", "workload-pd-cpu", "model-fb-opt-125m"],
[e2e-llm-inference-service]                 prompt="You are an expert in Kubernetes-native machine learning serving platforms, with deep knowledge of the KServe project. "
[e2e-llm-inference-service]                 "Explain the challenges of serving large-scale models, GPU scheduling, and how KServe integrates with capabilities like multi-model serving. "
[e2e-llm-inference-service]                 "Provide a detailed comparison with open source alternatives, focusing on operational trade-offs.",
[e2e-llm-inference-service]                 response_assertion=assert_200_with_choices,
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-custom-route-timeout-pd",
[e2e-llm-inference-service]                     "scheduler-managed",
[e2e-llm-inference-service]                     "workload-pd-cpu",
[e2e-llm-inference-service]                     "model-fb-opt-125m",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="You are an expert in Kubernetes-native machine learning serving platforms, with deep knowledge of the KServe project. "
[e2e-llm-inference-service]                 "Explain the challenges of serving large-scale models, GPU scheduling, and how KServe integrates with capabilities like multi-model serving. "
[e2e-llm-inference-service]                 "Provide a detailed comparison with open source alternatives, focusing on operational trade-offs.",
[e2e-llm-inference-service]                 service_name="custom-route-timeout-pd-test",
[e2e-llm-inference-service]                 response_assertion=assert_200_with_choices,
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-with-refs-pd",
[e2e-llm-inference-service]                     "scheduler-managed",
[e2e-llm-inference-service]                     "workload-pd-cpu",
[e2e-llm-inference-service]                     "model-fb-opt-125m",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="You are an expert in Kubernetes-native machine learning serving platforms, with deep knowledge of the KServe project. "
[e2e-llm-inference-service]                 "Explain the challenges of serving large-scale models, GPU scheduling, and how KServe integrates with capabilities like multi-model serving. "
[e2e-llm-inference-service]                 "Provide a detailed comparison with open source alternatives, focusing on operational trade-offs.",
[e2e-llm-inference-service]                 service_name="router-with-refs-pd-test",
[e2e-llm-inference-service]                 response_assertion=assert_200_with_choices,
[e2e-llm-inference-service]                 before_test=[
[e2e-llm-inference-service]                     lambda: create_router_resources(
[e2e-llm-inference-service]                         gateways=[ROUTER_GATEWAYS[1]],
[e2e-llm-inference-service]                         routes=[ROUTER_ROUTES[2], ROUTER_ROUTES[3]],
[e2e-llm-inference-service]                     )
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[
[e2e-llm-inference-service]                 pytest.mark.cluster_cpu,
[e2e-llm-inference-service]                 pytest.mark.cluster_single_node,
[e2e-llm-inference-service]                 pytest.mark.custom_gateway,
[e2e-llm-inference-service]             ],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-managed",
[e2e-llm-inference-service]                     "workload-dp-ep-gpu",
[e2e-llm-inference-service]                     "workload-dp-ep-prefill-gpu",
[e2e-llm-inference-service]                     "model-deepseek-v2-lite",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="Delve into the multifaceted implications of a fully disaggregated cloud architecture, specifically "
[e2e-llm-inference-service]                 "where the compute plane (P) and the data plane (D) are independently deployed and managed for a "
[e2e-llm-inference-service]                 "geographically distributed, high-throughput, low-latency microservices ecosystem. Beyond the "
[e2e-llm-inference-service]                 "fundamental challenges of network latency and data consistency, elaborate on the advanced "
[e2e-llm-inference-service]                 "considerations and trade-offs inherent in such a setup: 1. Network Architecture and Protocols: "
[e2e-llm-inference-service]                 "How would the network fabric and underlying protocols (e.g., RDMA, custom transport layers) need to "
[e2e-llm-inference-service]                 "evolve to support optimal performance and minimize inter-plane communication overhead, especially for "
[e2e-llm-inference-service]                 "synchronous operations? Discuss the role of network programmability (e.g., SDN, P4) in dynamically "
[e2e-llm-inference-service]                 "optimizing routing and traffic flow between P and D. 2. Advanced Data Consistency and Durability: "
[e2e-llm-inference-service]                 "Explore sophisticated data consistency models (e.g., causal consistency, strong eventual consistency) "
[e2e-llm-inference-service]                 "and their applicability in balancing performance and data integrity across a globally distributed data plane. "
[e2e-llm-inference-service]                 "Detail strategies for ensuring data durability and fault tolerance, including multi-region replication, "
[e2e-llm-inference-service]                 "intelligent partitioning, and recovery mechanisms in the event of partial or full plane failures. "
[e2e-llm-inference-service]                 "3. Dynamic Resource Orchestration and Cost Optimization: Analyze how an orchestration layer would intelligently "
[e2e-llm-inference-service]                 "manage the independent scaling of compute (P) and data (D) resources, considering fluctuating workloads, "
[e2e-llm-inference-service]                 "cost efficiency, and performance targets (e.g., using predictive analytics for resource provisioning). "
[e2e-llm-inference-service]                 "Discuss mechanisms for dynamically reallocating compute nodes to different data partitions based on "
[e2e-llm-inference-service]                 "workload patterns and data locality, potentially involving live migration strategies. "
[e2e-llm-inference-service]                 "4. Security and Compliance in a Distributed Landscape: Address the enhanced security perimeter "
[e2e-llm-inference-service]                 "challenges, including securing communication channels between P and D (encryption in transit, mutual TLS), "
[e2e-llm-inference-service]                 "fine-grained access control to data at rest and in motion, and identity management across disaggregated "
[e2e-llm-inference-service]                 "components. Discuss how such an architecture impacts compliance with regulatory frameworks (e.g., GDPR, HIPAA) "
[e2e-llm-inference-service]                 "concerning data sovereignty, privacy, and auditability. 5. Operational Complexity and Observability: "
[e2e-llm-inference-service]                 "Examine the increased complexity in monitoring, logging, and tracing across highly decoupled compute and "
[e2e-llm-inference-service]                 "data planes. What specialized tooling and practices (e.g., distributed tracing with OpenTelemetry, advanced AIOps) "
[e2e-llm-inference-service]                 "would be essential? How would incident response and troubleshooting differ in this disaggregated environment "
[e2e-llm-inference-service]                 "compared to traditional integrated systems? Consider the challenges of pinpointing root causes across "
[e2e-llm-inference-service]                 "independent failures. 6. Real-world Applicability and Future Trends: Identify specific industries "
[e2e-llm-inference-service]                 "or use cases (e.g., high-frequency trading, IoT edge processing, large language model inference) "
[e2e-llm-inference-service]                 "where the benefits of P/D disaggregation would strongly outweigh its complexities. "
[e2e-llm-inference-service]                 "Conclude by speculating on emerging technologies or paradigms (e.g., serverless compute functions "
[e2e-llm-inference-service]                 "directly interacting with object storage, in-memory disaggregation) that could further drive or "
[e2e-llm-inference-service]                 "transform P/D disaggregation in cloud computing.",
[e2e-llm-inference-service]                 max_tokens=2000,
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[
[e2e-llm-inference-service]                 pytest.mark.cluster_gpu,
[e2e-llm-inference-service]                 pytest.mark.cluster_nvidia,
[e2e-llm-inference-service]                 pytest.mark.cluster_nvidia_roce,
[e2e-llm-inference-service]             ],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-no-scheduler",
[e2e-llm-inference-service]                     "workload-single-cpu",
[e2e-llm-inference-service]                     "model-fb-opt-125m",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="What is KServe?",
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[
[e2e-llm-inference-service]                 pytest.mark.cluster_cpu,
[e2e-llm-inference-service]                 pytest.mark.cluster_single_node,
[e2e-llm-inference-service]                 pytest.mark.no_scheduler,
[e2e-llm-inference-service]             ],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-managed",
[e2e-llm-inference-service]                     "workload-simulated-dp-ep-cpu",
[e2e-llm-inference-service]                     "model-fb-opt-125m",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="This test simulates DP+EP that can run on CPU, the idea is to test the LWS-based deployment, "
[e2e-llm-inference-service]                 "but without the resources requirements for DP+EP (GPUs and ROCe/IB).",
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_multi_node],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         # Scheduler config tests
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-managed",
[e2e-llm-inference-service]                     "scheduler-with-inline-config",
[e2e-llm-inference-service]                     "workload-llmd-simulator",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 service_name="scheduler-inline-config-test",
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-managed",
[e2e-llm-inference-service]                     "scheduler-with-configmap-ref",
[e2e-llm-inference-service]                     "workload-llmd-simulator",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 service_name="scheduler-configmap-ref-test",
[e2e-llm-inference-service]                 before_test=[create_scheduler_configmap],
[e2e-llm-inference-service]                 after_test=[delete_scheduler_configmap],
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-managed",
[e2e-llm-inference-service]                     "scheduler-with-replicas",
[e2e-llm-inference-service]                     "workload-llmd-simulator",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 service_name="scheduler-ha-replicas-test",
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]         # Precise prefix KV cache routing test
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-managed",
[e2e-llm-inference-service]                     "scheduler-with-precise-prefix-cache-inline-config",
[e2e-llm-inference-service]                     "workload-llmd-simulator-kvcache",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 service_name="precise-prefix-cache-test",
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[
[e2e-llm-inference-service]                 pytest.mark.cluster_cpu,
[e2e-llm-inference-service]                 pytest.mark.cluster_single_node,
[e2e-llm-inference-service]                 pytest.mark.llmd_simulator,
[e2e-llm-inference-service]             ],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]     ],
[e2e-llm-inference-service]     indirect=["test_case"],
[e2e-llm-inference-service]     ids=generate_test_id,
[e2e-llm-inference-service] )
[e2e-llm-inference-service] @log_execution
[e2e-llm-inference-service] def test_llm_inference_service(test_case: TestCase):  # noqa: F811
[e2e-llm-inference-service]     inject_k8s_proxy()
[e2e-llm-inference-service]
[e2e-llm-inference-service]     kserve_client = KServeClient(
[e2e-llm-inference-service]         config_file=os.environ.get("KUBECONFIG", "~/.kube/config"),
[e2e-llm-inference-service]         client_configuration=client.Configuration(),
[e2e-llm-inference-service]     )
[e2e-llm-inference-service]
[e2e-llm-inference-service]     service_name = test_case.llm_service.metadata.name
[e2e-llm-inference-service]     if not test_case.llm_service.metadata.annotations:
[e2e-llm-inference-service]         test_case.llm_service.metadata.annotations = {}
[e2e-llm-inference-service]
[e2e-llm-inference-service]     test_case.llm_service.metadata.annotations[
[e2e-llm-inference-service]         "security.opendatahub.io/enable-auth"
[e2e-llm-inference-service]     ] = "false"
[e2e-llm-inference-service]
[e2e-llm-inference-service]     try:
[e2e-llm-inference-service]         create_llmisvc(kserve_client, test_case.llm_service)
[e2e-llm-inference-service] >       wait_for_llm_isvc_ready(
[e2e-llm-inference-service]             kserve_client, test_case.llm_service, test_case.wait_timeout
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:410:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] args = (, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kin...h-ref-d1f07093'},
[e2e-llm-inference-service] {'name': 'model-fb-opt-125m-router-with-r-c22ea8a0'}]},
[e2e-llm-inference-service] 'status': None}, 900)
[e2e-llm-inference-service] kwargs = {}, func_name = 'wait_for_llm_isvc_ready'
[e2e-llm-inference-service] timestamp_start = '2026-04-24T19:26:19.533341', start_time = 1777058779.533605
[e2e-llm-inference-service] duration = 900.3854250907898, timestamp_end = '2026-04-24T19:41:19.919031'
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @functools.wraps(func)
[e2e-llm-inference-service]     def wrapper(*args, **kwargs):
[e2e-llm-inference-service]         func_name = func.__name__
[e2e-llm-inference-service]
[e2e-llm-inference-service]         timestamp_start = datetime.now().isoformat()
[e2e-llm-inference-service]         logger.info(
[e2e-llm-inference-service]             f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]         start_time = time.time()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           result = func(*args, **kwargs)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/logging.py:40:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
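The wrapper frame above is cut off by the traceback at result = func(*args, **kwargs). A sketch completing the @log_execution decorator follows; the success and error tails are assumptions reconstructed from the "end - ✅ in N.NNNs" and "end - ❌ N.NNNs: ..." lines this log emits throughout.

    # Sketch completing @log_execution (the except/success tails are assumptions
    # inferred from the start/end lines visible throughout this log).
    import functools
    import logging
    import time
    from datetime import datetime

    logger = logging.getLogger("e2e.llmisvc.logging")

    def log_execution(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            func_name = func.__name__
            timestamp_start = datetime.now().isoformat()
            logger.info(f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}")
            start_time = time.time()
            try:
                result = func(*args, **kwargs)
            except Exception as e:
                duration = time.time() - start_time
                logger.error(f"[{func_name}] [{datetime.now().isoformat()}] end - ❌ {duration:.3f}s: {e}")
                raise
            duration = time.time() - start_time
            logger.info(f"[{func_name}] [{datetime.now().isoformat()}] end - ✅ in {duration:.3f}s")
            return result
        return wrapper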
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client = 
[e2e-llm-inference-service] given = {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': {'security....er-with-ref-d1f07093'},
[e2e-llm-inference-service] {'name': 'model-fb-opt-125m-router-with-r-c22ea8a0'}]},
[e2e-llm-inference-service] 'status': None}
[e2e-llm-inference-service] timeout_seconds = 900
[e2e-llm-inference-service]
[e2e-llm-inference-service] @log_execution
[e2e-llm-inference-service] def wait_for_llm_isvc_ready(
[e2e-llm-inference-service]     kserve_client: KServeClient,
[e2e-llm-inference-service]     given: V1alpha1LLMInferenceService,
[e2e-llm-inference-service]     timeout_seconds: int = 900,
[e2e-llm-inference-service] ) -> str:
[e2e-llm-inference-service]     def assert_llm_isvc_ready():
[e2e-llm-inference-service]         out = get_llmisvc(
[e2e-llm-inference-service]             kserve_client,
[e2e-llm-inference-service]             given.metadata.name,
[e2e-llm-inference-service]             given.metadata.namespace,
[e2e-llm-inference-service]             given.api_version.split("/")[1],
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service]         if "status" not in out:
[e2e-llm-inference-service]             raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         status = out["status"]
[e2e-llm-inference-service]         if "conditions" not in status:
[e2e-llm-inference-service]             raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]         got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]         for condition in conditions:
[e2e-llm-inference-service]             if condition.get("status") == "True":
[e2e-llm-inference-service]                 got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]         missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]         if missing_conditions:
[e2e-llm-inference-service]             raise AssertionError(
[e2e-llm-inference-service]                 f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]             )
[e2e-llm-inference-service]         return True
[e2e-llm-inference-service]
[e2e-llm-inference-service] >   return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:618:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] assertion_fn = <function wait_for_llm_isvc_ready.<locals>.assert_llm_isvc_ready at 0x7f60bf043420>
[e2e-llm-inference-service] timeout = 900, interval = 1.0
[e2e-llm-inference-service]
[e2e-llm-inference-service] def wait_for(
[e2e-llm-inference-service]     assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
[e2e-llm-inference-service] ) -> Any:
[e2e-llm-inference-service]     """Wait for the assertion to succeed within timeout."""
[e2e-llm-inference-service]     deadline = time.time() + timeout
[e2e-llm-inference-service]     while True:
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           return assertion_fn()
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:628:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def assert_llm_isvc_ready():
[e2e-llm-inference-service]         out = get_llmisvc(
[e2e-llm-inference-service]             kserve_client,
[e2e-llm-inference-service]             given.metadata.name,
[e2e-llm-inference-service]             given.metadata.namespace,
[e2e-llm-inference-service]             given.api_version.split("/")[1],
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service]         if "status" not in out:
[e2e-llm-inference-service]             raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         status = out["status"]
[e2e-llm-inference-service]         if "conditions" not in status:
[e2e-llm-inference-service]             raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]         got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]         for condition in conditions:
[e2e-llm-inference-service]             if condition.get("status") == "True":
[e2e-llm-inference-service]                 got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]         missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]         if missing_conditions:
[e2e-llm-inference-service] >           raise AssertionError(
[e2e-llm-inference-service]                 f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]             )
[e2e-llm-inference-service] E           AssertionError: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:613: AssertionError
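wait_for is likewise truncated at its first return by the traceback above. The retry tail it presumably implements is sketched below; the except branch and the per-iteration "Waiting: ..." log call are assumptions based on the once-per-second polling recorded in this log, and the logger name will not match the real module.

    # Sketch of the poll-until-deadline tail of wait_for (assumed, not verbatim).
    import logging
    import time
    from typing import Any, Callable

    logger = logging.getLogger("e2e.llmisvc.test_llm_inference_service")

    def wait_for(
        assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
    ) -> Any:
        """Wait for the assertion to succeed within timeout."""
        deadline = time.time() + timeout
        while True:
            try:
                return assertion_fn()
            except AssertionError as e:
                if time.time() >= deadline:
                    raise  # surfaces the final "Missing true conditions: ..." above
                logger.info(f"Waiting: {e}")
                time.sleep(interval)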
[e2e-llm-inference-service] ------------------------------ Captured log setup ------------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
[e2e-llm-inference-service] INFO kserve.trace:gw_api.py:34 Checking Gateway router-gateway-2 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO kserve.trace:gw_api.py:62 Resource not found, creating Gateway router-gateway-2
[e2e-llm-inference-service] INFO kserve.trace:gw_api.py:70 ✓ Successfully created Gateway router-gateway-2
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1251 ✓ Created/updated Gateway router-gateway-2
[e2e-llm-inference-service] INFO kserve.trace:gw_api.py:121 Checking HttpRoute router-route-3 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO kserve.trace:gw_api.py:149 Resource not found, creating HttpRoute router-route-3
[e2e-llm-inference-service] INFO kserve.trace:gw_api.py:157 ✓ Successfully created HttpRoute router-route-3
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1260 ✓ Created/updated HTTPRoute router-route-3
[e2e-llm-inference-service] INFO kserve.trace:gw_api.py:121 Checking HttpRoute router-route-4 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO kserve.trace:gw_api.py:149 Resource not found, creating HttpRoute router-route-4
[e2e-llm-inference-service] INFO kserve.trace:gw_api.py:157 ✓ Successfully created HttpRoute router-route-4
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1260 ✓ Created/updated HTTPRoute router-route-4
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-with-refs-pd-router-with-c2ec731e in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-with-refs-pd-router-with-c2ec731e
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-with-refs-pd-router-with-c2ec731e
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scheduler-managed-router-with-r-57d1c131 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scheduler-managed-router-with-r-57d1c131
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scheduler-managed-router-with-r-57d1c131
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-pd-cpu-router-with-ref-d1f07093 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-pd-cpu-router-with-ref-d1f07093
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-pd-cpu-router-with-ref-d1f07093
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig model-fb-opt-125m-router-with-r-c22ea8a0 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig model-fb-opt-125m-router-with-r-c22ea8a0
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig model-fb-opt-125m-router-with-r-c22ea8a0
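The setup log above shows the same check-then-create flow for each Gateway, HTTPRoute, and LLMInferenceServiceConfig. A sketch of that pattern for the Gateway case follows, using the standard Gateway API group/version/plural; the spec body here is a placeholder, not the one the fixtures actually apply.

    # Sketch of the "Checking ... / Resource not found, creating ..." flow above.
    # Group/version/plural follow Gateway API conventions; the spec is a placeholder.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    co = client.CustomObjectsApi()
    group, version, ns, plural = "gateway.networking.k8s.io", "v1", "kserve-ci-e2e-test", "gateways"
    name = "router-gateway-2"
    try:
        co.get_namespaced_custom_object(group, version, ns, plural, name)
    except ApiException as e:
        if e.status != 404:
            raise
        gateway = {"apiVersion": f"{group}/{version}", "kind": "Gateway",
                   "metadata": {"name": name, "namespace": ns},
                   "spec": {}}  # placeholder spec
        co.create_namespaced_custom_object(group, version, ns, plural, gateway)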
[e2e-llm-inference-service] ------------------------------ Captured log call -------------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [test_llm_inference_service] [2026-04-24T19:26:19.135040] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-with-refs-pd', 'scheduler-managed', 'workload-pd-cpu', 'model-fb-opt-125m'], prompt='You are an expert in Kubernetes-native machine learning serving platforms, with deep knowledge of the KServe project. Explain the challenges of serving large-scale models, GPU scheduling, and how KServe integrates with capabilities like multi-model serving. Provide a detailed comparison with open source alternatives, focusing on operational trade-offs.', service_name='router-with-refs-pd-test', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[<function <lambda> at 0x7f60bfffae80>], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': None,
[e2e-llm-inference-service] 'creation_timestamp': None,
[e2e-llm-inference-service] 'deletion_grace_period_seconds': None,
[e2e-llm-inference-service] 'deletion_timestamp': None,
[e2e-llm-inference-service] 'finalizers': None,
[e2e-llm-inference-service] 'generate_name': None,
[e2e-llm-inference-service] 'generation': None,
[e2e-llm-inference-service] 'labels': None,
[e2e-llm-inference-service] 'managed_fields': None,
[e2e-llm-inference-service] 'name': 'router-with-refs-pd-test',
[e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service] 'owner_references': None,
[e2e-llm-inference-service] 'resource_version': None,
[e2e-llm-inference-service] 'self_link': None,
[e2e-llm-inference-service] 'uid': None},
[e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-with-refs-pd-router-with-c2ec731e'},
[e2e-llm-inference-service] {'name': 'scheduler-managed-router-with-r-57d1c131'},
[e2e-llm-inference-service] {'name': 'workload-pd-cpu-router-with-ref-d1f07093'},
[e2e-llm-inference-service] {'name': 'model-fb-opt-125m-router-with-r-c22ea8a0'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')}
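The kwargs above show the fully resolved TestCase: with indirect=["test_case"] in the parametrize decorator, each pytest.param is not handed to the test directly but routed through a fixture named test_case via request.param. The suite's real fixture is not shown in this log; a minimal sketch of the hook it must implement:

    # Minimal sketch of the indirect-parametrization hook. The real fixture in
    # the suite also resolves base_refs into the generated
    # LLMInferenceServiceConfig names seen above; that part is an assumption.
    import pytest

    @pytest.fixture
    def test_case(request):
        return request.param  # the TestCase built in @pytest.mark.parametrize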
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T19:26:19.147905] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': {'security.opendatahub.io/enable-auth': 'false'},
[e2e-llm-inference-service] 'creation_timestamp': None,
[e2e-llm-inference-service] 'deletion_grace_period_seconds': None,
[e2e-llm-inference-service] 'deletion_timestamp': None,
[e2e-llm-inference-service] 'finalizers': None,
[e2e-llm-inference-service] 'generate_name': None,
[e2e-llm-inference-service] 'generation': None,
[e2e-llm-inference-service] 'labels': None,
[e2e-llm-inference-service] 'managed_fields': None,
[e2e-llm-inference-service] 'name': 'router-with-refs-pd-test',
[e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service] 'owner_references': None,
[e2e-llm-inference-service] 'resource_version': None,
[e2e-llm-inference-service] 'self_link': None,
[e2e-llm-inference-service] 'uid': None},
[e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-with-refs-pd-router-with-c2ec731e'},
[e2e-llm-inference-service] {'name': 'scheduler-managed-router-with-r-57d1c131'},
[e2e-llm-inference-service] {'name': 'workload-pd-cpu-router-with-ref-d1f07093'},
[e2e-llm-inference-service] {'name': 'model-fb-opt-125m-router-with-r-c22ea8a0'}]},
[e2e-llm-inference-service] 'status': None}), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T19:26:19.533209] end - ✅ in 0.385s
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T19:26:19.533341] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': {'security.opendatahub.io/enable-auth': 'false'},
[e2e-llm-inference-service] 'creation_timestamp': None,
[e2e-llm-inference-service] 'deletion_grace_period_seconds': None,
[e2e-llm-inference-service] 'deletion_timestamp': None,
[e2e-llm-inference-service] 'finalizers': None,
[e2e-llm-inference-service] 'generate_name': None,
[e2e-llm-inference-service] 'generation': None,
[e2e-llm-inference-service] 'labels': None,
[e2e-llm-inference-service] 'managed_fields': None,
[e2e-llm-inference-service] 'name': 'router-with-refs-pd-test',
[e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service] 'owner_references': None,
[e2e-llm-inference-service] 'resource_version': None,
[e2e-llm-inference-service] 'self_link': None,
[e2e-llm-inference-service] 'uid': None},
[e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-with-refs-pd-router-with-c2ec731e'},
[e2e-llm-inference-service] {'name': 'scheduler-managed-router-with-r-57d1c131'},
[e2e-llm-inference-service] {'name': 'workload-pd-cpu-router-with-ref-d1f07093'},
[e2e-llm-inference-service] {'name': 'model-fb-opt-125m-router-with-r-c22ea8a0'}]},
[e2e-llm-inference-service] 'status': None}, 900), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:19.533610] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:19.626802] end - ✅ in 0.093s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
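Each get_llmisvc poll above is a sub-second read of the custom resource. A sketch of the equivalent call with the raw Kubernetes client follows, assuming the conventional plural llminferenceservices; the helper's real implementation is not shown in this log.

    # Sketch of what a get_llmisvc-style read does (plural name is an assumption).
    from kubernetes import client, config

    config.load_kube_config()
    obj = client.CustomObjectsApi().get_namespaced_custom_object(
        group="serving.kserve.io",
        version="v1alpha1",
        namespace="kserve-ci-e2e-test",
        plural="llminferenceservices",
        name="router-with-refs-pd-test",
    )
    conditions = obj.get("status", {}).get("conditions", [])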
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:27.134493] start - args=(<...>, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:27.226988] end - ✅ in 0.092s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:28.227266] start - args=(<...>, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:28.234896] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/router-route-3: "False" (reason "BackendNotFound", message "backend(router-with-refs-pd-test-inference-pool-ip-4ae47a7d.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'severity': 'Info', 'status': 'False', 'type': 'HTTPRoutesReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'message': 'The following HTTPRoutes are not ready: [...same message as HTTPRoutesReady above...]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'Ready'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'message': 'The following HTTPRoutes are not ready: [...same message as HTTPRoutesReady above...]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'RouterReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'reason': 'Progressing', 'status': 'False', 'type': 'WorkloadsReady'}]
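The first populated status pins the blocker down: HTTPRoutesReady is False with reason BackendNotFound, because router-route-3 still points at a backend (router-with-refs-pd-test-inference-pool-ip-4ae47a7d.kserve-ci-e2e-test.svc.cluster.local) that does not exist yet, and Ready and RouterReady inherit that state; the identical condition set is re-logged on every one-second poll through 19:26:32. When chasing this by hand, the route's own status can be read off the Gateway API object; a sketch assuming the v1 Gateway API (older installs serve v1beta1) and a reachable kubeconfig:

    from kubernetes import client, config

    config.load_kube_config()

    route = client.CustomObjectsApi().get_namespaced_custom_object(
        group="gateway.networking.k8s.io",
        version="v1",  # assumption: GA Gateway API; may be v1beta1 on older clusters
        namespace="kserve-ci-e2e-test",
        plural="httproutes",
        name="router-route-3",
    )

    # Each parent gateway reports its own conditions; a ResolvedRefs=False entry
    # with reason BackendNotFound is the standard signal for a missing backend.
    for parent in route.get("status", {}).get("parents", []):
        for cond in parent.get("conditions", []):
            print(cond["type"], cond["status"], cond.get("reason"), cond.get("message"))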
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:33.344800] start - args=(<...>, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:33.354135] end - ✅ in 0.009s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'},
[e2e-llm-inference-service]   {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
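By 19:26:33 the route has resolved (HTTPRoutesReady flips to True with a 19:26:32Z transition) and every remaining False condition reduces to MinimumReplicasUnavailable: the main, prefill, scheduler, and router Deployments exist but have no available replicas yet. The poll loop re-logs this same condition set once per second through at least 19:26:55 below, which at this point is normal rollout progress rather than a failure in itself. A quick way to see which Deployments are still short, sketched with the stock client (only the namespace is taken from the log):

    from kubernetes import client, config

    config.load_kube_config()

    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment("kserve-ci-e2e-test").items:
        desired = dep.spec.replicas or 0
        available = dep.status.available_replicas or 0
        if available < desired:
            # Mirrors the 'Deployment does not have minimum availability.' message
            print(f"{dep.metadata.name}: {available}/{desired} replicas available")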
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:55.534684] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:55.542398] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:56.542782] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:56.550764] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:57.551096] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:57.558702] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, 
{'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:58.558985] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:58.567269] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 
'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:26:59.567583] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:26:59.575719] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:00.576085] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:00.583600] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
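(Annotation: the repetition above is a plain condition-poll loop: once per second the test fetches the LLMInferenceService, collects the condition types whose status is 'True', and keeps waiting while any expected type is missing. The sketch below illustrates that shape only; it is not the actual code at test_llm_inference_service.py:632, and the function name wait_for_true_conditions, the timeout, and the poll interval are assumptions; only get_llmisvc, the expected-condition set, and the "Waiting" message format come from the log.)

    import time

    # Illustrative sketch of the polling pattern visible in this log; the
    # function name, timeout, and interval are assumptions, not the real test.
    def wait_for_true_conditions(get_llmisvc, name, namespace, version,
                                 expected, timeout_s=600, interval_s=1.0):
        missing = set(expected)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            obj = get_llmisvc(name, namespace, version)  # the start/end pairs above
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return obj  # all expected conditions are True; stop waiting
            # produces the repeated "Waiting: Missing true conditions: ..." entries
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {set(expected)}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f"Conditions never became True: {missing}")

(On that reading, the wait can only end once MainWorkloadReady and PrefillWorkloadReady, and with them WorkloadsReady and Ready, leave the MinimumReplicasUnavailable state; the entries below show RouterReady recovering at 19:26:59 while the workload conditions stay False.)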
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:00.576085] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:00.583600] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
(RouterReady and SchedulerWorkloadReady transitioned to True at 19:26:59. The start/end pair and the "Waiting" entry above then repeat once per second for the polls at 19:27:01.583919 through 19:27:23.770658; twenty-three further iterations with an identical conditions list.)
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:24.778940] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:24.786919] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime':
'2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:25.787162] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:25.794339] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:26.794589] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:26.802579] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:27.802998] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:27.810620] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:28.810899] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:28.818811] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:29.819084] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:29.826719] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:30.826997] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T19:27:30.834404] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:31.834689] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:31.842278] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 
'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:32.842616] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:32.850683] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:33.850968] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:33.859141] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 
'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:34.859676] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:34.867186] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:35.867454] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:35.875986] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:36.876265] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:36.884808] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:37.885143] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:27:37.893074] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:38.893506] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:38.901179] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 
'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:39.901553] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:39.908970] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:40.909250] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:40.917970] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 
'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:41.918312] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:41.926889] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:42.927145] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:42.937941] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:43.938186] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:43.946139] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:44.946429] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:45.059572] end - ✅ in 0.113s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:46.060180] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:46.068460] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:47.068745] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:47.076910] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:27:48.077231] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:27:48.085621] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]

[... the same three-record pattern (a [get_llmisvc] start/end pair, each call returning in 0.007-0.018s, followed by the test_llm_inference_service.py:632 message "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}") repeats once per second from 2026-04-24T19:27:49 through 2026-04-24T19:28:25. Every poll reports the identical condition set:]

    TYPE                    STATUS  LAST TRANSITION        REASON / MESSAGE
    GatewaysReady           True    2026-04-24T19:26:27Z
    HTTPRoutesReady         True    2026-04-24T19:26:32Z
    InferencePoolReady      True    2026-04-24T19:26:27Z
    MainWorkloadReady       False   2026-04-24T19:26:32Z   MinimumReplicasUnavailable: Deployment does not have minimum availability.
    PrefillWorkloadReady    False   2026-04-24T19:26:32Z   MinimumReplicasUnavailable: Deployment does not have minimum availability.
    PresetsCombined         True    2026-04-24T19:26:27Z
    Ready                   False   2026-04-24T19:26:32Z   MinimumReplicasUnavailable: Deployment does not have minimum availability.
    RouterReady             True    2026-04-24T19:26:59Z
    SchedulerWorkloadReady  True    2026-04-24T19:26:59Z
    WorkloadsReady          False   2026-04-24T19:26:32Z   MinimumReplicasUnavailable: Deployment does not have minimum availability.

[In short: RouterReady is already True, but Ready and WorkloadsReady stay False because MainWorkloadReady and PrefillWorkloadReady report MinimumReplicasUnavailable: the main and prefill deployments backing router-with-refs-pd-test in namespace kserve-ci-e2e-test never reach minimum availability, and the aggregate conditions inherit that state, so the test keeps polling.]
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:24.406392] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:24.413962] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:25.414291] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:25.422638] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:26.422950] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:26.431217] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T19:28:27.431549] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:27.440837] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:28.441337] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:28.448761] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:29.449009] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:29.457850] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:30.458130] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:30.465664] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 
'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:31.465956] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:31.473969] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:32.474259] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:32.487799] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 
'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:33.488128] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:33.496478] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:34.496847] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:34.504956] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:35.505265] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:35.513645] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not 
have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:36.514024] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:36.522078] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:37.522529] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:37.530542] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:38.530820] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:38.539150] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:39.539426] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:39.547349] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, 
expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:40.547687] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:40.556182] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 
'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:41.556771] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:41.567373] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:42.567683] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:42.575378] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 
'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:43.575657] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:43.584209] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:44.584600] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:44.592560] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum 
availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:45.592836] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:45.600652] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:46.600926] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:46.609480] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 
'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:47.609863] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:47.618386] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:48.618709] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:48.626334] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:49.626630] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:49.635531] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:50.635941] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:50.643394] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:51.643708] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:51.651778] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 
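The wait loop at test_llm_inference_service.py:632 is a plain poll-and-compare over the LLMInferenceService status conditions: fetch the resource roughly once per second, collect the condition types whose status is 'True', and keep waiting until {'Ready', 'WorkloadsReady', 'RouterReady'} is a subset of that set. Below is a minimal sketch of that pattern, assuming the official kubernetes Python client; the GROUP/VERSION/PLURAL and NAMESPACE/NAME constants are illustrative stand-ins for what the test resolves from its fixtures, not the test's actual implementation:

```python
import time
from kubernetes import client, config

# Illustrative coordinates only; the real e2e test resolves the
# group/version/plural and the namespace/name from its own fixtures.
GROUP, VERSION, PLURAL = "serving.kserve.io", "v1alpha1", "llminferenceservices"
NAMESPACE, NAME = "kserve-ci-e2e-test", "router-with-refs-pd-test"
EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

def wait_for_llmisvc_ready(timeout_s: float = 300.0, interval_s: float = 1.0) -> None:
    """Poll the LLMInferenceService until every expected condition is True."""
    config.load_kube_config()  # in-cluster code would use load_incluster_config()
    api = client.CustomObjectsApi()
    deadline = time.monotonic() + timeout_s
    missing = set(EXPECTED)
    while time.monotonic() < deadline:
        obj = api.get_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, NAME)
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return  # every expected condition reports status 'True'
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {EXPECTED}, got {conditions}")
        time.sleep(interval_s)
    raise TimeoutError(f"LLMInferenceService never became ready; still missing {missing}")
```

In this run the loop cannot converge: the router-side conditions (GatewaysReady, HTTPRoutesReady, InferencePoolReady, RouterReady, SchedulerWorkloadReady) all reached True by 19:26:59, but MainWorkloadReady and PrefillWorkloadReady (and therefore the aggregate WorkloadsReady and top-level Ready) stay False with reason MinimumReplicasUnavailable, meaning the main and prefill Deployments never got their minimum number of available replicas. A likely next triage step would be to inspect those Deployments and their pods in the kserve-ci-e2e-test namespace, for example with `kubectl get pods -n kserve-ci-e2e-test` and `kubectl describe deployment -n kserve-ci-e2e-test`, to see why the replicas are unavailable.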
'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:52.652212] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:52.660518] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:53.660972] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:53.673678] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: 
Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:54.673991] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:54.685338] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:55.685645] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:55.695207] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:56.695556] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:56.703926] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:57.704398] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:57.712292] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:58.712627] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:58.720392] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:28:59.720694] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:28:59.728629] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:00.728921] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:00.737281] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:01.737814] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:01.747330] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:02.747628] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:02.755606] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:03.755988] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:03.764390] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:04.764870] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:04.772908] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:05.773188] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:05.781411] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:06.781972] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:06.790165] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:07.790527] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T19:29:07.798097] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:08.798469] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:08.806433] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 
'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:09.806767] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:09.814129] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:10.814515] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:10.822713] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 
'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:11.823040] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:11.830344] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:12.830668] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:12.839824] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
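The pattern behind these "Waiting" lines is a plain condition-wait loop: fetch the LLMInferenceService once per second, collect the condition types whose status is 'True', and stop once the expected set is covered. A minimal sketch, assuming a hypothetical get_llmisvc(name, namespace) helper that returns the resource as a dict; this is modeled on the log output above, not the actual helper in test_llm_inference_service.py:

import time

# Sketch of a condition-wait loop of the kind that emits the
# "Waiting: Missing true conditions" lines above. `get_llmisvc` is a
# hypothetical callable returning the LLMInferenceService as a dict.
def wait_for_true_conditions(get_llmisvc, name, namespace,
                             expected, timeout_s=600.0, interval_s=1.0):
    deadline = time.monotonic() + timeout_s
    missing = set(expected)
    while time.monotonic() < deadline:
        obj = get_llmisvc(name, namespace)
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = set(expected) - true_types
        if not missing:
            return conditions  # every expected condition reports True
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {set(expected)}, got {conditions}")
        time.sleep(interval_s)
    raise TimeoutError(f"conditions never became True: {missing}")

With expected={'Ready', 'WorkloadsReady', 'RouterReady'}, such a loop reproduces the shape of the log: RouterReady turned True at 19:26:59, so only Ready and WorkloadsReady remain in the missing set on every iteration.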
[... the get_llmisvc poll (logging.py:34 start / logging.py:43 end, 0.007-0.012s each) and the identical "Waiting: Missing true conditions" message repeat once per second from 19:28:49 through 19:29:23; those duplicate entries are elided, since only the poll timestamps advance while the condition list above never changes ...]
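Every False condition above carries reason=MinimumReplicasUnavailable: the main and prefill Deployments never reached minimum availability, while the router and scheduler workloads did. A follow-up check along these lines narrows the cause to the Deployment and pod level; this is a sketch using the kubernetes Python client, with only the namespace taken from the log and everything else assumed:

from kubernetes import client, config

# Sketch: list Deployments in the test namespace and show which ones
# are short on available replicas, then the pods behind them. Assumes
# a kubeconfig such as the one written by the get-kubeconfig step.
config.load_kube_config()
apps, core = client.AppsV1Api(), client.CoreV1Api()

ns = "kserve-ci-e2e-test"
for dep in apps.list_namespaced_deployment(ns).items:
    avail = dep.status.available_replicas or 0
    want = dep.spec.replicas or 0
    if avail < want:
        print(f"{dep.metadata.name}: {avail}/{want} available")

# Pod-level view: phases and container wait reasons (ImagePullBackOff,
# CrashLoopBackOff, Pending on scheduling, ...) usually explain a
# MinimumReplicasUnavailable condition.
for pod in core.list_namespaced_pod(ns).items:
    waiting = [
        cs.state.waiting.reason
        for cs in (pod.status.container_statuses or [])
        if cs.state and cs.state.waiting
    ]
    print(pod.metadata.name, pod.status.phase, waiting)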
[... raw log resumes at the last captured poll ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:24.932529] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:24.940572] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason':
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:25.940886] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:25.949075] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:26.949385] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:26.957264] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 
'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:27.957678] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:27.965900] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:28.966349] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:28.974029] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:29.974405] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:29.982850] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:30.983338] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:30.991531] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:31.991936] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:32.000026] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum 
availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:33.000348] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:33.010367] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:34.011571] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:34.025244] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:35.025738] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:35.034016] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:36.034284] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 
'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:36.041877] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:37.042253] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:37.049986] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 
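For context, the repeating block above is a plain poll-until-true over the LLMInferenceService's status.conditions. A minimal sketch of that pattern, assuming the official kubernetes Python client and that the resource is served at serving.kserve.io/v1alpha1 with plural llminferenceservices (the group and plural are assumptions, not read from this log); this illustrates the loop, it is not the test's actual code:

import time
from kubernetes import client, config

REQUIRED = {"Ready", "WorkloadsReady", "RouterReady"}

def wait_for_llmisvc(name: str, namespace: str, timeout_s: int = 600) -> dict:
    """Poll the LLMInferenceService until every REQUIRED condition is True."""
    config.load_kube_config()  # assumption: a kubeconfig is available
    api = client.CustomObjectsApi()
    deadline = time.time() + timeout_s
    missing = set(REQUIRED)
    while time.time() < deadline:
        obj = api.get_namespaced_custom_object(
            group="serving.kserve.io", version="v1alpha1",  # assumed coordinates
            namespace=namespace, plural="llminferenceservices", name=name,
        )
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = REQUIRED - true_types
        if not missing:
            return obj
        print(f"Waiting: missing true conditions: {missing}")
        time.sleep(1)  # the log shows roughly one poll per second
    raise TimeoutError(f"{name}: still missing {missing} after {timeout_s}s")

Calling wait_for_llmisvc('router-with-refs-pd-test', 'kserve-ci-e2e-test') against this cluster would reproduce the cadence seen here.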
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:38.050381] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:38.057967] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [
  {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'},
  {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'},
  {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'},
  {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
  {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'},
  {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
  {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'},
  {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'},
  {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'},
  {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[... PrefillWorkloadReady turned True at 2026-04-24T19:29:37Z; the poll then repeats once per second, through the 2026-04-24T19:29:55 iteration, with these conditions unchanged: MainWorkloadReady, Ready, and WorkloadsReady remain False with reason MinimumReplicasUnavailable ...]
{'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:55.203681] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:55.212001] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:56.212526] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:56.220524] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:57.220815] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:57.229631] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:58.230137] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:58.238438] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:29:59.238973] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:29:59.247199] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:00.247544] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:00.255877] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:01.256200] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:01.263693] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:02.263959] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:02.271431] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:03.271724] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:03.282158] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:04.282441] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:04.295133] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:05.295480] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:05.303285] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:06.303587] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:06.311983] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:07.312395] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:07.322483] end - ✅ in 0.010s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:08.322746] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:08.334260] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:09.334800] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:09.341970] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:10.342361] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:10.351604] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:11.351884] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:11.360512] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:12.360822] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:12.368913] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:13.369196] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:13.377545] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:14.377995] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:14.386182] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:15.386730] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:15.395544] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:16.396002] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:16.404700] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:17.405101] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:17.413175] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:18.413457] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:18.421769] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:19.422375] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:19.431898] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:20.432233] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:20.440113] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:21.440485] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:21.448891] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
[... 37 further get_llmisvc polls and identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}" statuses, logged once per second from 2026-04-24T19:30:21 through 2026-04-24T19:30:57, omitted; the condition list was unchanged throughout ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:58.767560] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:58.775273] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason':
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:30:59.775810] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:30:59.783806] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:00.784175] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:00.792212] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:01.792556] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:01.800234] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:02.800570] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:02.809369] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:03.809684] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:03.817582] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:04.818118] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:04.826097] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:05.826589] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:05.834702] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:06.835017] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:06.842870] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:07.843196] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:07.850541] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:08.850856] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:08.859090] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:09.859410] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:09.867640] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:10.867984] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:10.875952] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:11.876267] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:11.884172] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:12.884694] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:12.895039] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:13.895391] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:13.903452] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:14.903921] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:14.912784] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:15.913227] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:15.922155] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:16.922744] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:16.931108] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:17.931419] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:17.940970] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:18.941498] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:18.950385] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:19.950652] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:19.960165] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:20.960639] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:20.969032] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:21.969349] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:21.977434] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:31:22.979037] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:31:22.987364] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, ...]

[e2e-llm-inference-service] From 2026-04-24T19:31:23 to 19:32:01 the test repeats the same three log entries roughly once per second, polling the LLMInferenceService and finding its status unchanged:

[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] end - ✅ in ~0.01s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}

Every poll reports the same conditions (severity 'Info' where present):

  GatewaysReady           True   since 2026-04-24T19:26:27Z
  HTTPRoutesReady         True   since 2026-04-24T19:26:32Z
  InferencePoolReady      True   since 2026-04-24T19:26:27Z
  MainWorkloadReady       False  since 2026-04-24T19:26:32Z  reason=MinimumReplicasUnavailable: Deployment does not have minimum availability.
  PrefillWorkloadReady    True   since 2026-04-24T19:29:37Z
  PresetsCombined         True   since 2026-04-24T19:26:27Z
  Ready                   False  since 2026-04-24T19:26:32Z  reason=MinimumReplicasUnavailable: Deployment does not have minimum availability.
  RouterReady             True   since 2026-04-24T19:26:59Z
  SchedulerWorkloadReady  True   since 2026-04-24T19:26:59Z
  WorkloadsReady          False  since 2026-04-24T19:26:32Z  reason=MinimumReplicasUnavailable: Deployment does not have minimum availability.

[identical get_llmisvc/Waiting poll entries from 19:31:23.987875 through 19:32:01.367538 elided; the loop continues unchanged]
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:02.367849] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:02.375551] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:03.375911] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:03.385511] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:04.385822] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:04.394937] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:05.395235] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:05.404271] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:06.404638] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:06.412943] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:07.413408] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:07.422218] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:08.422893] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:08.432226] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:09.432743] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:09.441748] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:10.442111] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:10.450123] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:11.450406] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:11.458848] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:12.459209] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:12.467090] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:13.467409] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:13.474872] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:14.475207] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:14.483770] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:15.484220] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:15.493049] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:16.493469] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:16.504822] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:17.505198] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:17.520476] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:18.520838] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:18.528669] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:19.528985] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:19.537553] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:20.538102] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:20.546840] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:21.547137] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:21.555163] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:22.555494] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:22.563381] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:23.563666] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:23.574025] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:24.574339] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:24.584813] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:25.585101] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:25.593860] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:26.594282] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:26.605932] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:27.606367] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:27.617528] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:28.617944] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:28.626376] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:29.626629] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:29.637620] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
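For readers unfamiliar with the loop above: test_llm_inference_service.py:632 is a plain Kubernetes condition-wait. It re-fetches the LLMInferenceService once per second and compares the set of condition types whose status is 'True' against the expected set. A minimal sketch of that pattern, assuming the official kubernetes Python client and that the CRD is served as group serving.kserve.io, version v1alpha1, plural llminferenceservices (the helper names here are illustrative, not the test's actual code):

# Minimal sketch of the condition-wait pattern visible in the log above.
# Assumed, not taken from the repo: the CRD group/plural and all helper names.
import time
from kubernetes import client, config

def get_llmisvc(api, name, namespace, version="v1alpha1"):
    # Mirrors the per-second "[get_llmisvc] start/end" pairs in the log.
    return api.get_namespaced_custom_object(
        group="serving.kserve.io", version=version,
        namespace=namespace, plural="llminferenceservices", name=name,
    )

def wait_for_conditions(name, namespace, expected, timeout_s=600):
    config.load_kube_config()
    api = client.CustomObjectsApi()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_llmisvc(api, name, namespace).get("status", {})
        conditions = status.get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = expected - true_types
        if not missing:
            return
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {expected}, got {conditions}")
        time.sleep(1)
    raise TimeoutError(f"{name}: conditions {expected} not all True within {timeout_s}s")

# e.g. wait_for_conditions('router-with-refs-pd-test', 'kserve-ci-e2e-test',
#                          {'Ready', 'WorkloadsReady', 'RouterReady'})

Note that RouterReady is already True in the output above; the wait is blocked solely on Ready and WorkloadsReady, both of which roll up from the unavailable main workload Deployment.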
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:30.638085] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:30.648682] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:31.648993] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:31.657274] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:32.657708] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:32.667443] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:33.667816] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:33.675520] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:34.675833] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:34.684167] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:35.684846] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:35.693546] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:36.693981] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:36.702295] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:37.702789] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:37.710545] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:38.710862] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:38.719021] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:39.719325] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:39.727735] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:40.728263] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:40.737550] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:41.737974] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:41.746247] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:42.746574] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:42.754171] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:43.754461] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:43.762533] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:44.762969] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:44.772509] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:45.772808] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:45.780551] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:46.780834] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:46.789662] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:47.789947] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:47.798734] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:48.799017] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:48.808747] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:49.809023] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:49.818701] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:50.819017] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:50.827448] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:51.827919] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:51.839193] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:52.839543] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:52.849904] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:53.850562] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:53.858740] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:54.859052] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:54.867931] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:55.868289] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:55.876569] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:32:56.876920] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:32:56.884924] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]

[... poll iterations from 19:32:57.885405 through 19:33:33 elided: roughly once per second the test logs a get_llmisvc start/end pair (logging.py:34/43, ~0.008s each) followed by the identical test_llm_inference_service.py:632 report "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}". The reported condition set never changes (all timestamps 2026-04-24):

  GatewaysReady            True   since 19:26:27
  HTTPRoutesReady          True   since 19:26:32
  InferencePoolReady       True   since 19:26:27
  MainWorkloadReady        False  since 19:26:32  MinimumReplicasUnavailable: Deployment does not have minimum availability.
  PrefillWorkloadReady     True   since 19:29:37
  PresetsCombined          True   since 19:26:27
  Ready                    False  since 19:26:32  MinimumReplicasUnavailable: Deployment does not have minimum availability.
  RouterReady              True   since 19:26:59
  SchedulerWorkloadReady   True   since 19:26:59
  WorkloadsReady           False  since 19:26:32  MinimumReplicasUnavailable: Deployment does not have minimum availability.

Only MainWorkloadReady carries a failure of its own; Ready and WorkloadsReady share its reason, message, and transition time, so they are evidently rolled up from it. ...]

[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:34.200338] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:34.209293] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason':
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:35.209760] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:35.217844] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:36.218116] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:36.226358] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:37.226625] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:37.235408] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:38.235800] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:38.242929] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:39.243283] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:39.251176] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:40.251645] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:40.258955] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:33:41.259242] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:33:41.267563] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
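For reference, the wait at test_llm_inference_service.py:632 boils down to the loop sketched below. This is a minimal reconstruction, not the actual test code: get_llmisvc is assumed to return the LLMInferenceService as a dict (the leading argument elided in the log's args=(, ...) is omitted here), and the timeout/interval defaults are illustrative.

import time

EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

def wait_until_ready(name, namespace, timeout=600.0, interval=1.0):
    """Poll the LLMInferenceService until all EXPECTED condition types are True."""
    deadline = time.monotonic() + timeout
    missing = set(EXPECTED)
    while time.monotonic() < deadline:
        # get_llmisvc is the assumed helper seen in the [get_llmisvc] log lines.
        resource = get_llmisvc(name, namespace, "v1alpha1")
        conditions = resource.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return conditions  # all expected conditions are True
        # This is the line repeated above: MainWorkloadReady stays False with
        # MinimumReplicasUnavailable, so Ready and WorkloadsReady never flip.
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {EXPECTED}, got {conditions}")
        time.sleep(interval)
    raise TimeoutError(f"conditions still missing after {timeout}s: {missing}")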
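Given that MainWorkloadReady (and therefore Ready and WorkloadsReady) is pinned False with reason MinimumReplicasUnavailable, the usual next step is to look at the main workload's Deployment and its pods. A hedged diagnostic sketch with the kubernetes Python client follows; the resource names are taken from the log, but the CRD group serving.kserve.io and plural llminferenceservices are assumptions inferred from the 'v1alpha1' seen above.

from kubernetes import client, config

config.load_kube_config()  # assumes KUBECONFIG points at the test cluster

ns, name = "kserve-ci-e2e-test", "router-with-refs-pd-test"

# Print the not-True conditions straight from the CR, mirroring the log above.
llmisvc = client.CustomObjectsApi().get_namespaced_custom_object(
    group="serving.kserve.io", version="v1alpha1",
    namespace=ns, plural="llminferenceservices", name=name,
)
for c in llmisvc.get("status", {}).get("conditions", []):
    if c.get("status") != "True":
        print(c["type"], c.get("reason"), c.get("message"))

# MinimumReplicasUnavailable comes from a Deployment, so find the one(s)
# short on available replicas and list their pods to see why they are stuck
# (image pulls, scheduling, failing probes, ...).
apps, core = client.AppsV1Api(), client.CoreV1Api()
for d in apps.list_namespaced_deployment(ns).items:
    if (d.status.available_replicas or 0) < (d.spec.replicas or 0):
        print("unavailable deployment:", d.metadata.name)
        selector = ",".join(
            f"{k}={v}" for k, v in (d.spec.selector.match_labels or {}).items()
        )
        for p in core.list_namespaced_pod(ns, label_selector=selector).items:
            print("  pod:", p.metadata.name, p.status.phase)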
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:07.488256] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:07.497386] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason':
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:08.497707] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:08.508621] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:09.509022] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:09.517463] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:10.517764] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:10.526532] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:11.526828] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:11.535151] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:12.535641] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:12.544483] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:13.544840] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:13.553590] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:14.553906] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:14.562932] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:15.563342] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:15.571428] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:16.571901] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:16.582832] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:17.583087] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:17.591035] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:18.591338] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:18.599885] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:19.600223] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:19.608268] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:20.608662] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:20.616797] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:21.617291] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:21.625275] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:22.625764] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:22.633273] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:23.633548] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:23.641094] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:24.641610] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:24.648986] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:25.649333] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:25.657164] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:26.657662] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:26.665923] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:27.666365] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:27.674323] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:28.674638] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:28.681952] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:29.682263] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:29.689915] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:30.690201] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:30.698959] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:34:31.699243] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:34:31.706999] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] ... (the get_llmisvc poll and the identical "Waiting: Missing true conditions" message repeat once per second from 2026-04-24T19:34:32 through 2026-04-24T19:35:09, each poll completing in ~0.01s; the condition list never changes: MainWorkloadReady, Ready, and WorkloadsReady stay 'False' with reason 'MinimumReplicasUnavailable', 'Deployment does not have minimum availability.') ...
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:10.030514] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:10.037754] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, ..., {'lastTransitionTime':
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:11.038036] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:11.046252] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:12.046804] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:12.055417] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:13.055886] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:13.063244] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:14.063553] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:14.071592] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:15.072014] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:15.079203] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:16.079624] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:16.087167] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:17.087489] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:17.095583] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:18.095865] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:18.103318] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:19.103673] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:19.111972] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:20.112278] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:20.123997] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:21.124344] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:21.132400] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:22.132691] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:22.142152] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:23.142477] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:23.150651] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:24.151149] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:24.159538] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:25.159801] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:25.167603] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:26.167927] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:26.175710] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:27.176009] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:27.184405] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
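For reference, the wait that emits these "Waiting: Missing true conditions" lines is a plain poll-and-compare loop: fetch the LLMInferenceService, collect the condition types whose status is 'True', and retry every second until the expected set is covered. The sketch below is a reconstruction under that assumption, not the actual kserve test helper; the function name, the injected get_llmisvc callable, and the timeout value are illustrative only.

import time

def wait_for_llmisvc_ready(get_llmisvc, name, namespace,
                           expected=frozenset({"Ready", "WorkloadsReady", "RouterReady"}),
                           timeout_s=600.0, poll_interval_s=1.0):
    # Poll once per interval until every expected condition reports status "True".
    missing = set(expected)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resource = get_llmisvc(name, namespace, "v1alpha1")
        conditions = resource.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = expected - true_types
        if not missing:
            return conditions  # success: everything the test expects is True
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {set(expected)}, got {conditions}")
        time.sleep(poll_interval_s)
    raise TimeoutError(f"{name}: conditions {missing} never became True")

In this run such a loop can never exit: RouterReady flipped to True at 19:26:59, but Ready and WorkloadsReady track the workload Deployment, which never achieved minimum availability.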
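Because every False condition here carries the Deployment-level reason MinimumReplicasUnavailable, the useful next step is to look past the LLMInferenceService at the Deployment and its pods. A minimal triage sketch with the standard kubernetes Python client follows, assuming local kubeconfig access to the test cluster; the namespace literal comes from the log above, everything else is illustrative.

from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig pointing at the test cluster
apps, core = client.AppsV1Api(), client.CoreV1Api()
ns = "kserve-ci-e2e-test"

# Deployments whose Available condition is False are the ones feeding
# MinimumReplicasUnavailable up into the LLMInferenceService status.
for dep in apps.list_namespaced_deployment(ns).items:
    for cond in dep.status.conditions or []:
        if cond.type == "Available" and cond.status == "False":
            print(dep.metadata.name, cond.reason, cond.message)

# The root cause usually shows up on the pods behind that Deployment:
# ImagePullBackOff, CrashLoopBackOff, Pending on resources, failing probes.
for pod in core.list_namespaced_pod(ns).items:
    for cs in pod.status.container_statuses or []:
        if cs.state.waiting and cs.state.waiting.reason:
            print(pod.metadata.name, cs.name, cs.state.waiting.reason, pod.status.phase)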
[e2e-llm-inference-service] INFO
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:42.300785] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:42.309184] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:43.309665] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:43.326799] end - ✅ in 0.017s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:44.327092] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:44.527845] end - ✅ in 0.201s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:45.528216] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:45.536825] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:46.537359] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:46.545528] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:47.545903] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:47.553551] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:48.553957] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:48.627442] end - ✅ in 0.073s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:49.627746] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:49.636487] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:50.636781] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:50.728082] end - ✅ in 0.091s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:51.728671] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:51.736865] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:52.737334] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:52.826941] end - ✅ in 0.089s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:53.827238] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:53.834697] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:54.834981] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:54.843709] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:55.844081] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:55.858604] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:56.859100] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:56.866788] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:57.867121] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:57.879231] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:58.879708] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:58.887191] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:59.887483] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:59.895140] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:00.895406] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:00.903029] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:01.903403] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:01.911000] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:02.911416] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:02.920731] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:03.921151] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:03.928863] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:04.929211] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:04.937227] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:05.937694] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:05.946222] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:06.946679] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:07.026844] end - ✅ in 0.080s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:08.027106] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:08.127548] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... identical poll/wait entries condensed: the get_llmisvc call (completing in ~0.01s each time) and the same "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}" message repeat once per second from 2026-04-24T19:36:09 through 2026-04-24T19:36:45; only the poll timestamps change, while the condition list stays identical throughout (MainWorkloadReady, Ready and WorkloadsReady remain 'False' with reason 'MinimumReplicasUnavailable': 'Deployment does not have minimum availability.'; all other conditions report 'True') ...]
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:46.538666] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:46.546066] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:47.546502] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:47.555820] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:48.556290] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:48.564042] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:49.564600] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:49.572716] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:50.573812] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:50.581725] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:51.582243] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:51.589909] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:52.590201] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:52.598155] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:53.598637] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:53.606295] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:54.606815] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:54.618091] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:55.618395] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:55.626657] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:56.627107] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:56.635401] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:57.635818] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:57.643748] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:58.644188] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:58.652272] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:59.652630] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:59.661427] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:00.661722] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:00.669863] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:01.670355] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:01.678983] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:02.679292] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:02.686574] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:03.686822] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:03.694343] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:04.694681] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:04.702119] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:05.702626] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:05.710765] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:06.711052] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:06.719478] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:07.719932] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:07.728108] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:08.728663] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:08.736497] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:09.736963] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:09.748109] end - ✅ in 0.011s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got the condition set tabulated below.

Conditions reported by the LLMInferenceService (identical on every poll in this excerpt; times are lastTransitionTime):

  type                    status  reason                      lastTransitionTime    message
  GatewaysReady           True                                2026-04-24T19:26:27Z
  HTTPRoutesReady         True                                2026-04-24T19:26:32Z
  InferencePoolReady      True                                2026-04-24T19:26:27Z
  MainWorkloadReady       False   MinimumReplicasUnavailable  2026-04-24T19:26:32Z  Deployment does not have minimum availability.
  PrefillWorkloadReady    True                                2026-04-24T19:29:37Z
  PresetsCombined         True                                2026-04-24T19:26:27Z
  Ready                   False   MinimumReplicasUnavailable  2026-04-24T19:26:32Z  Deployment does not have minimum availability.
  RouterReady             True                                2026-04-24T19:26:59Z
  SchedulerWorkloadReady  True                                2026-04-24T19:26:59Z
  WorkloadsReady          False   MinimumReplicasUnavailable  2026-04-24T19:26:32Z  Deployment does not have minimum availability.

Representative poll iteration:

[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:10.748373] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:10.761379] end - ✅ in 0.013s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [conditions as tabulated above]

[... this get_llmisvc poll and the identical "Waiting: Missing true conditions" message repeat about once per second from 2026-04-24T19:37:10.748 through 2026-04-24T19:37:48.083 in this excerpt; only the poll timestamps and call durations (0.007s to 0.013s) change, and the excerpt ends partway through the final condition dump ...]
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:49.083861] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:49.092073] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:50.092607] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:50.100888] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:51.101135] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:51.108660] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:52.109066] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:52.117622] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:53.118071] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:53.126710] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:54.127223] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:54.135322] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:55.135628] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:55.143409] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:56.143708] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:56.151851] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:57.152342] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:57.160679] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:58.161057] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:58.168552] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:59.168831] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:59.176503] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:00.176778] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:00.184357] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:01.184670] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:01.192455] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:02.192947] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:02.202771] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:03.203123] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:03.211652] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:04.212036] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:04.220006] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:05.220357] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:05.228923] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:06.229481] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:06.237234] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:07.237713] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:07.246054] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:08.246406] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:08.254079] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:09.254644] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:09.262743] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:10.263038] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:10.270614] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:11.270918] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:11.278353] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:12.278703] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:12.286184] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:13.286627] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:13.294489] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:14.294934] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:14.302977] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:15.303279] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:15.311676] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:16.311932] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:16.320228] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
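The records above come from a one-second poll loop: the test fetches the LLMInferenceService 'router-with-refs-pd-test' in namespace 'kserve-ci-e2e-test' (API version v1alpha1) and re-checks status.conditions until 'Ready', 'WorkloadsReady', and 'RouterReady' all report True. The sketch below is a minimal illustrative equivalent of that wait loop, not the test's actual helper; it assumes the official kubernetes Python client and that the CRD is served under the group serving.kserve.io with plural llminferenceservices. The resource name, namespace, version, and expected condition types are taken from the log.

    # Illustrative sketch only -- not the kserve test's real helper. Assumes the
    # `kubernetes` Python client and the serving.kserve.io/llminferenceservices
    # group/plural for the LLMInferenceService CRD.
    import time

    from kubernetes import client, config

    GROUP = "serving.kserve.io"        # assumed API group for KServe CRDs
    VERSION = "v1alpha1"               # from the log
    PLURAL = "llminferenceservices"    # assumed CRD plural
    NAME = "router-with-refs-pd-test"  # from the log
    NAMESPACE = "kserve-ci-e2e-test"   # from the log
    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}  # from the log


    def wait_for_conditions(timeout_s: float = 600.0, interval_s: float = 1.0) -> None:
        """Poll status.conditions until every expected type reports status True."""
        config.load_kube_config()
        api = client.CustomObjectsApi()
        deadline = time.monotonic() + timeout_s
        missing = set(EXPECTED)
        while time.monotonic() < deadline:
            obj = api.get_namespaced_custom_object(GROUP, VERSION, NAMESPACE, PLURAL, NAME)
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return  # all expected conditions are True
            # Same information the test logs: which conditions block readiness, and why.
            for c in conditions:
                if c["type"] in missing and c.get("message"):
                    print(f"{c['type']}=False ({c.get('reason')}): {c['message']}")
            time.sleep(interval_s)
        raise TimeoutError(f"conditions still not True after {timeout_s}s: {sorted(missing)}")

In this run the blocking condition never clears: every cycle reports MinimumReplicasUnavailable ("Deployment does not have minimum availability.") on MainWorkloadReady, which keeps WorkloadsReady and the top-level Ready at False even though the router, scheduler, and prefill workloads are up. The natural place to look next would be the main workload Deployment and its pods in kserve-ci-e2e-test.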
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:17.320727] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:17.330062] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:18.330519] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:18.338374] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:19.338840] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:19.347641] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:20.348055] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:20.355020] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:21.355431] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:21.363293] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:22.363630] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:22.371876] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:23.372291] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:23.380747] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:24.381213] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:24.389329] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:25.389623] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:25.401134] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:26.401618] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:26.414133] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:27.414673] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:27.422825] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:28.423357] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:28.432955] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:29.433376] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:29.443481] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:30.443803] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:30.450572] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:31.450862] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:31.463361] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:32.463643] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:32.473538] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:33.474023] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:33.482768] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:34.483101] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:34.491053] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:35.491359] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:35.504571] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:36.505008] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:36.513099] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:37.513584] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:37.521783] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:38.522233] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:38.531400] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:39.532647] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:39.541460] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:40.541773] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:40.550099] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:41.550552] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:41.558291] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:42.558719] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:42.567181] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:43.567783] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:43.576395] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [identical status polls elided for readability: from 19:38:44 to 19:39:22 the test re-ran get_llmisvc against 'router-with-refs-pd-test' in namespace 'kserve-ci-e2e-test' (API version 'v1alpha1') roughly once per second, each call returning in under 0.012s, and every iteration logged the same test_llm_inference_service.py:632 message, "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}", with the unchanged condition list shown above: MainWorkloadReady, Ready, and WorkloadsReady 'False' with reason 'MinimumReplicasUnavailable' ('Deployment does not have minimum availability.', lastTransitionTime 2026-04-24T19:26:32Z); GatewaysReady, HTTPRoutesReady, InferencePoolReady, PrefillWorkloadReady, PresetsCombined, RouterReady, and SchedulerWorkloadReady 'True'. The log resumes below with the 19:39:22 poll.]
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:21.896563] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:21.904351] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:22.904666] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:22.913908] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:23.914521] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:23.922988] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:24.923339] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:24.930744] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:25.931049] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:25.941028] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:26.941496] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:26.949517] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:27.949930] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:27.959358] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:28.959810] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:28.967477] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:29.967783] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:29.976023] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:30.976283] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:30.985478] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:31.985906] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:31.993394] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:32.993827] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:33.001920] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:34.002263] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:34.010093] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:35.010384] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:35.018327] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:36.018629] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:36.026582] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:37.026979] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:37.034634] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:38.034924] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:38.043155] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:39.043675] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:39.051539] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:40.051830] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:40.059966] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:41.060373] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:41.069486] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:42.069922] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:42.078436] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:43.078936] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:43.086705] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:44.087048] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:44.095827] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:45.096149] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:45.104588] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:46.104867] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:46.115825] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:47.116107] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:47.124653] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:48.124969] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:48.133070] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
[... the get_llmisvc start/end pair and the identical "Waiting: Missing true conditions" message above repeat once per second, with only the polling timestamps advancing (all condition lastTransitionTime values stay fixed), from 2026-04-24T19:39:48 through 2026-04-24T19:40:25, where the captured log cuts off mid-message ...]
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:26.457075] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:26.465658] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:27.466167] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:27.474851] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:28.475889] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:28.484122] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:29.484401] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:29.492294] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:30.492615] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:30.500187] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:31.500496] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:31.508863] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:32.509356] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:32.517268] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:33.517609] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:33.525735] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:34.526147] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:34.534627] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:35.535069] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:35.543730] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:36.544105] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:36.551161] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:37.551547] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:37.559895] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:38.560371] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:38.568745] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:39.569032] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:39.576692] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:40.577063] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:40.584840] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:41.585151] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:41.592768] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:42.593085] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:42.601599] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:43.602085] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:43.610238] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:44.610539] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:44.618918] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:45.619229] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:45.627192] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 
'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:46.627536] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:46.635747] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:47.636219] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:47.645206] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:48.645707] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:48.653618] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:49.653895] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:49.660961] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': 
'2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:50.661347] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:50.669371] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:51.669729] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:51.677755] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got (same ten conditions as above)
[e2e-llm-inference-service] [... the get_llmisvc poll (end - ✅ in ~0.008s) and the identical "Waiting: Missing true conditions" entry repeated once per second through 2026-04-24T19:41:19; the reported condition list never changed ...]
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T19:41:19.919031] end - ❌ 900.385s: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'GatewaysReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:29:37Z', 'severity': 'Info', 'status': 'True', 'type': 'PrefillWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:27Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:26:59Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:26:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
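Every condition except MainWorkloadReady, Ready, and WorkloadsReady went True, and all three failures carry reason MinimumReplicasUnavailable: the main-workload Deployment never got its pods available within the 900s window. A minimal out-of-band diagnostic sketch (not part of this suite; assumes the ephemeral cluster's kubeconfig is active, namespace taken from the log; kubectl describe would show the same):

# Hypothetical diagnostic helper, not from the repo: list Deployments in the
# e2e namespace that are below minimum availability and show why their pods
# are not ready. Uses only standard kubernetes-client calls.
from kubernetes import client, config

config.load_kube_config()  # assumes the cluster's admin kubeconfig is active
ns = "kserve-ci-e2e-test"  # namespace from the log above

apps, core = client.AppsV1Api(), client.CoreV1Api()
for dep in apps.list_namespaced_deployment(ns).items:
    want = dep.spec.replicas or 0
    have = dep.status.available_replicas or 0
    if have < want:
        print(f"{dep.metadata.name}: {have}/{want} available")
        for cond in dep.status.conditions or []:
            print(f"  {cond.type}={cond.status} {cond.reason}: {cond.message}")

# Pod-level state usually names the real blocker (ImagePullBackOff,
# Pending/unschedulable, or failing readiness probes).
for pod in core.list_namespaced_pod(ns).items:
    for cs in pod.status.container_statuses or []:
        if not cs.ready:
            print(pod.metadata.name, cs.name, cs.state)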
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:19.919207] start - args=(, 'router-with-refs-pd-test', 'kserve-ci-e2e-test'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:19.925802] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T19:41:21.401389] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': {'security.opendatahub.io/enable-auth': 'false'},
[e2e-llm-inference-service] 'creation_timestamp': None,
[e2e-llm-inference-service] 'deletion_grace_period_seconds': None,
[e2e-llm-inference-service] 'deletion_timestamp': None,
[e2e-llm-inference-service] 'finalizers': None,
[e2e-llm-inference-service] 'generate_name': None,
[e2e-llm-inference-service] 'generation': None,
[e2e-llm-inference-service] 'labels': None,
[e2e-llm-inference-service] 'managed_fields': None,
[e2e-llm-inference-service] 'name': 'router-with-refs-pd-test',
[e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service] 'owner_references': None,
[e2e-llm-inference-service] 'resource_version': None,
[e2e-llm-inference-service] 'self_link': None,
[e2e-llm-inference-service] 'uid': None},
[e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-with-refs-pd-router-with-c2ec731e'},
[e2e-llm-inference-service] {'name': 'scheduler-managed-router-with-r-57d1c131'},
[e2e-llm-inference-service] {'name': 'workload-pd-cpu-router-with-ref-d1f07093'},
[e2e-llm-inference-service] {'name': 'model-fb-opt-125m-router-with-r-c22ea8a0'}]},
[e2e-llm-inference-service] 'status': None}), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [delete_llmisvc] [2026-04-24T19:41:21.425124] end - ✅ in 0.023s
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_inference_service] [2026-04-24T19:41:21.425225] end - ❌ 902.290s: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got (same ten conditions as the wait_for_llm_isvc_ready error above)
[e2e-llm-inference-service] _ test_llm_autoscaling_keda_deployment[router-managed-workload-llmd-simulator-no-replicas-scaling-keda] _
[e2e-llm-inference-service] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-llm-inference-service]
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', se...
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-dep-1ac84077'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service] @pytest.mark.llminferenceservice
[e2e-llm-inference-service] @pytest.mark.autoscaling
[e2e-llm-inference-service] @pytest.mark.autoscaling_keda
[e2e-llm-inference-service] @pytest.mark.parametrize(
[e2e-llm-inference-service]     "test_case",
[e2e-llm-inference-service]     [
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-managed",
[e2e-llm-inference-service]                     "workload-llmd-simulator-no-replicas",
[e2e-llm-inference-service]                     "scaling-keda",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 service_name="autoscale-keda-deploy",
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[
[e2e-llm-inference-service]                 pytest.mark.cluster_cpu,
[e2e-llm-inference-service]                 pytest.mark.cluster_single_node,
[e2e-llm-inference-service]                 pytest.mark.llmd_simulator,
[e2e-llm-inference-service]             ],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]     ],
[e2e-llm-inference-service]     indirect=["test_case"],
[e2e-llm-inference-service]     ids=generate_test_id,
[e2e-llm-inference-service] )
[e2e-llm-inference-service] @log_execution
[e2e-llm-inference-service] def test_llm_autoscaling_keda_deployment(test_case: TestCase):
[e2e-llm-inference-service]     """KEDA + Deployment: VA and ScaledObject exist; no HPA; pods scale up under load."""
[e2e-llm-inference-service]     inject_k8s_proxy()
[e2e-llm-inference-service]     kserve_client = _new_kserve_client()
[e2e-llm-inference-service]     service_name = test_case.llm_service.metadata.name
[e2e-llm-inference-service]
[e2e-llm-inference-service]     try:
[e2e-llm-inference-service] >       _create_and_wait(kserve_client, test_case)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:414:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
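The docstring above names the expected KEDA wiring, though none of those assertions run here because creation/readiness fails first. If one wanted to verify that wiring out-of-band, a minimal sketch (not from the repo; assumes kubeconfig access and KEDA's standard keda.sh/v1alpha1 API; what exactly "no HPA" asserts is not shown in this excerpt):

# Illustrative check, not from the suite: list KEDA ScaledObjects and HPAs
# in the e2e namespace. Group/version/plural follow KEDA's published CRD.
from kubernetes import client, config

config.load_kube_config()
ns = "kserve-ci-e2e-test"

scaled = client.CustomObjectsApi().list_namespaced_custom_object(
    group="keda.sh", version="v1alpha1", namespace=ns, plural="scaledobjects"
)
print([o["metadata"]["name"] for o in scaled["items"]])

# Note: KEDA generates its own HPA under the hood, so a "no HPA" assertion
# presumably targets HPAs created directly by KServe.
hpas = client.AutoscalingV2Api().list_namespaced_horizontal_pod_autoscaler(ns)
print([h.metadata.name for h in hpas.items])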
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-dep-1ac84077'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service] def _create_and_wait(kserve_client, test_case):
[e2e-llm-inference-service]     """Create LLMISVC and wait for it to be ready."""
[e2e-llm-inference-service]     create_llmisvc(kserve_client, test_case.llm_service)
[e2e-llm-inference-service] >   wait_for_llm_isvc_ready(
[e2e-llm-inference-service]         kserve_client, test_case.llm_service, test_case.wait_timeout
[e2e-llm-inference-service]     )
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:295:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] args = (, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kin...-repl-da49e827'},
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-dep-1ac84077'}]},
[e2e-llm-inference-service] 'status': None}, 900)
[e2e-llm-inference-service] kwargs = {}, func_name = 'wait_for_llm_isvc_ready'
[e2e-llm-inference-service] timestamp_start = '2026-04-24T19:35:42.898540', start_time = 1777059342.8989525
[e2e-llm-inference-service] duration = 900.5347411632538, timestamp_end = '2026-04-24T19:50:43.433694'
[e2e-llm-inference-service]
[e2e-llm-inference-service] @functools.wraps(func)
[e2e-llm-inference-service] def wrapper(*args, **kwargs):
[e2e-llm-inference-service]     func_name = func.__name__
[e2e-llm-inference-service]
[e2e-llm-inference-service]     timestamp_start = datetime.now().isoformat()
[e2e-llm-inference-service]     logger.info(
[e2e-llm-inference-service]         f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
[e2e-llm-inference-service]     )
[e2e-llm-inference-service]     start_time = time.time()
[e2e-llm-inference-service]
[e2e-llm-inference-service]     try:
[e2e-llm-inference-service] >       result = func(*args, **kwargs)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/logging.py:40:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client = 
[e2e-llm-inference-service] given = {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': None,
[e2e-llm-inference-service] ...tor-no-repl-da49e827'},
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-dep-1ac84077'}]},
[e2e-llm-inference-service] 'status': None}
[e2e-llm-inference-service] timeout_seconds = 900
[e2e-llm-inference-service]
[e2e-llm-inference-service] @log_execution
[e2e-llm-inference-service] def wait_for_llm_isvc_ready(
[e2e-llm-inference-service]     kserve_client: KServeClient,
[e2e-llm-inference-service]     given: V1alpha1LLMInferenceService,
[e2e-llm-inference-service]     timeout_seconds: int = 900,
[e2e-llm-inference-service] ) -> str:
[e2e-llm-inference-service]     def assert_llm_isvc_ready():
[e2e-llm-inference-service]         out = get_llmisvc(
[e2e-llm-inference-service]             kserve_client,
[e2e-llm-inference-service]             given.metadata.name,
[e2e-llm-inference-service]             given.metadata.namespace,
[e2e-llm-inference-service]             given.api_version.split("/")[1],
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service]         if "status" not in out:
[e2e-llm-inference-service]             raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         status = out["status"]
[e2e-llm-inference-service]         if "conditions" not in status:
[e2e-llm-inference-service]             raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]         got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]         for condition in conditions:
[e2e-llm-inference-service]             if condition.get("status") == "True":
[e2e-llm-inference-service]                 got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]         missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]         if missing_conditions:
[e2e-llm-inference-service]             raise AssertionError(
[e2e-llm-inference-service]                 f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]             )
[e2e-llm-inference-service]         return True
[e2e-llm-inference-service]
[e2e-llm-inference-service] >   return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:618:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] assertion_fn = .assert_llm_isvc_ready at 0x7f7cef6aa700>
[e2e-llm-inference-service] timeout = 900, interval = 1.0
[e2e-llm-inference-service]
[e2e-llm-inference-service] def wait_for(
[e2e-llm-inference-service]     assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
[e2e-llm-inference-service] ) -> Any:
[e2e-llm-inference-service]     """Wait for the assertion to succeed within timeout."""
[e2e-llm-inference-service]     deadline = time.time() + timeout
[e2e-llm-inference-service]     while True:
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           return assertion_fn()
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:628:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] def assert_llm_isvc_ready():
[e2e-llm-inference-service]     out = get_llmisvc(
[e2e-llm-inference-service]         kserve_client,
[e2e-llm-inference-service]         given.metadata.name,
[e2e-llm-inference-service]         given.metadata.namespace,
[e2e-llm-inference-service]         given.api_version.split("/")[1],
[e2e-llm-inference-service]     )
[e2e-llm-inference-service]
[e2e-llm-inference-service]     if "status" not in out:
[e2e-llm-inference-service]         raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]     status = out["status"]
[e2e-llm-inference-service]     if "conditions" not in status:
[e2e-llm-inference-service]         raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]     expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]     got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]     conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]     for condition in conditions:
[e2e-llm-inference-service]         if condition.get("status") == "True":
[e2e-llm-inference-service]             got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]     missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]     if missing_conditions:
[e2e-llm-inference-service] >       raise AssertionError(
[e2e-llm-inference-service]             f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]         )
[e2e-llm-inference-service] E       AssertionError: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:613: AssertionError
[e2e-llm-inference-service] ------------------------------ Captured log setup ------------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-autoscale-keda-d-27e06c40 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-autoscale-keda-d-27e06c40
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-autoscale-keda-d-27e06c40
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-llmd-simulator-no-repl-da49e827 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-llmd-simulator-no-repl-da49e827
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-llmd-simulator-no-repl-da49e827
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scaling-keda-autoscale-keda-dep-1ac84077 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scaling-keda-autoscale-keda-dep-1ac84077
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scaling-keda-autoscale-keda-dep-1ac84077
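The root cause of this second failure is different from the first: the controller reports ScalingCRDNotFound because the VariantAutoscaling kind in llmd.ai/v1alpha1 is not registered on the cluster, so the KEDA scaling path can never reconcile and the service stays un-Ready for the full 900 s. A minimal sketch for confirming whether the CRD is installed; the name variantautoscalings.llmd.ai is an assumption based on the usual plural.group convention, not taken from the log.

    # Sketch: check whether the VariantAutoscaling CRD exists on the cluster.
    # "variantautoscalings.llmd.ai" is an ASSUMED name (plural.group convention).
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    ext = client.ApiextensionsV1Api()
    try:
        crd = ext.read_custom_resource_definition("variantautoscalings.llmd.ai")
        print("installed:", crd.spec.group, [v.name for v in crd.spec.versions])
    except ApiException as e:
        if e.status == 404:
            # Matches the controller error: no matches for kind "VariantAutoscaling"
            # in version "llmd.ai/v1alpha1"
            print("VariantAutoscaling CRD is not installed")
        else:
            raise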
[e2e-llm-inference-service] ------------------------------ Captured log call -------------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [test_llm_autoscaling_keda_deployment] [2026-04-24T19:35:42.832754] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', service_name='autoscale-keda-deploy', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': None,
[e2e-llm-inference-service] 'creation_timestamp': None,
[e2e-llm-inference-service] 'deletion_grace_period_seconds': None,
[e2e-llm-inference-service] 'deletion_timestamp': None,
[e2e-llm-inference-service] 'finalizers': None,
[e2e-llm-inference-service] 'generate_name': None,
[e2e-llm-inference-service] 'generation': None,
[e2e-llm-inference-service] 'labels': None,
[e2e-llm-inference-service] 'managed_fields': None,
[e2e-llm-inference-service] 'name': 'autoscale-keda-deploy',
[e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service] 'owner_references': None,
[e2e-llm-inference-service] 'resource_version': None,
[e2e-llm-inference-service] 'self_link': None,
[e2e-llm-inference-service] 'uid': None},
[e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-keda-d-27e06c40'},
[e2e-llm-inference-service] {'name': 'workload-llmd-simulator-no-repl-da49e827'},
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-dep-1ac84077'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
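The polling records that follow come from the wait_for helper whose source appears in the traceback above, cut off by pytest at the failing return assertion_fn() line. Based on the observed behaviour -- a retry roughly every interval=1.0 seconds, a "Waiting: ..." log record at test_llm_inference_service.py:632, and the AssertionError finally raised after the 900 s deadline -- the truncated retry branch presumably looks something like this sketch; it is a reconstruction under those assumptions, not the repository's verbatim code.

    # Sketch reconstructing the retry branch of wait_for; the except clause is ASSUMED.
    import logging
    import time
    from typing import Any, Callable

    logger = logging.getLogger("e2e.llmisvc.test_llm_inference_service")

    def wait_for(
        assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
    ) -> Any:
        """Wait for the assertion to succeed within timeout."""
        deadline = time.time() + timeout
        while True:
            try:
                return assertion_fn()
            except AssertionError as e:
                if time.time() >= deadline:
                    raise  # after 900 s this becomes the AssertionError shown above
                logger.info("Waiting: %s", e)  # produces the "Waiting: ..." records below
                time.sleep(interval)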
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T19:35:42.845067] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': None,
[e2e-llm-inference-service] 'creation_timestamp': None,
[e2e-llm-inference-service] 'deletion_grace_period_seconds': None,
[e2e-llm-inference-service] 'deletion_timestamp': None,
[e2e-llm-inference-service] 'finalizers': None,
[e2e-llm-inference-service] 'generate_name': None,
[e2e-llm-inference-service] 'generation': None,
[e2e-llm-inference-service] 'labels': None,
[e2e-llm-inference-service] 'managed_fields': None,
[e2e-llm-inference-service] 'name': 'autoscale-keda-deploy',
[e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service] 'owner_references': None,
[e2e-llm-inference-service] 'resource_version': None,
[e2e-llm-inference-service] 'self_link': None,
[e2e-llm-inference-service] 'uid': None},
[e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-keda-d-27e06c40'},
[e2e-llm-inference-service] {'name': 'workload-llmd-simulator-no-repl-da49e827'},
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-dep-1ac84077'}]},
[e2e-llm-inference-service] 'status': None}), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T19:35:42.898381] end - ✅ in 0.053s
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T19:35:42.898540] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': None,
[e2e-llm-inference-service] 'creation_timestamp': None,
[e2e-llm-inference-service] 'deletion_grace_period_seconds': None,
[e2e-llm-inference-service] 'deletion_timestamp': None,
[e2e-llm-inference-service] 'finalizers': None,
[e2e-llm-inference-service] 'generate_name': None,
[e2e-llm-inference-service] 'generation': None,
[e2e-llm-inference-service] 'labels': None,
[e2e-llm-inference-service] 'managed_fields': None,
[e2e-llm-inference-service] 'name': 'autoscale-keda-deploy',
[e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service] 'owner_references': None,
[e2e-llm-inference-service] 'resource_version': None,
[e2e-llm-inference-service] 'self_link': None,
[e2e-llm-inference-service] 'uid': None},
[e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-keda-d-27e06c40'},
[e2e-llm-inference-service] {'name': 'workload-llmd-simulator-no-repl-da49e827'},
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-dep-1ac84077'}]},
[e2e-llm-inference-service] 'status': None}, 900), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:42.898972] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:42.903838] end - ✅ in 0.005s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:43.904193] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:43.911159] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:44.911417] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:44.919073] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:35:45.919369] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:35:45.927177] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... the identical get_llmisvc / "Waiting: Missing true conditions" cycle repeats roughly once per second (2026-04-24T19:35:46 through 19:36:09) with the same ScalingCRDNotFound condition dump each time; duplicate entries omitted ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:10.409366] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:10.416617] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:11.417019] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:11.427067] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:12.427330] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:12.434555] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
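The three records above are one full pass of the wait loop at test_llm_inference_service.py:632: the test fetches the autoscale-keda-deploy LLMInferenceService once per second and checks whether the Ready, RouterReady, and WorkloadsReady conditions have all turned True. The same check can be reproduced by hand against the cluster; the sketch below is illustrative only, not the test's code, and the llminferenceservices.serving.kserve.io group/plural is an assumption rather than something the log states:

    # Poll the LLMInferenceService status conditions once per second (Ctrl-C to stop).
    # Resource name and namespace are taken from the log above;
    # the group/plural llminferenceservices.serving.kserve.io is assumed.
    while true; do
      kubectl get llminferenceservices.serving.kserve.io autoscale-keda-deploy \
        -n kserve-ci-e2e-test \
        -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
      echo '---'
      sleep 1
    done

With the cluster in this state, such a loop would keep printing MainWorkloadReady=False, PresetsCombined=True, Ready=False, RouterReady=Unknown, WorkloadsReady=False, matching the condition dump the test logs on every iteration.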
[The get_llmisvc start/end records and the "Waiting: Missing true conditions" dump then repeat once per second with identical condition output; only the timestamps and call durations advance. The iterations from 19:36:06 through 19:36:40 are elided here.]
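Every False condition carries the same underlying error: 'no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"'. That message means the API server has no VariantAutoscaling CRD registered, so the llmisvc controller cannot fetch or create the autoscale-keda-deploy-kserve-va object and keeps reporting ScalingCRDNotFound. A quick way to confirm on the cluster, assuming standard kubectl and the usual <plural>.<group> CRD naming (the variantautoscalings plural is an assumption):

    # Does the API server expose any resources in the llmd.ai group?
    kubectl api-resources --api-group=llmd.ai
    # Is the CRD object itself installed? (CRD names are <plural>.<group>)
    kubectl get crd variantautoscalings.llmd.ai

If both commands come back empty or NotFound, the CRD was never installed on the ephemeral test cluster, which would explain why Ready, RouterReady, and WorkloadsReady never leave False/Unknown and the wait loop spins until it times out.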
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:41.667714] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:41.675354] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message':
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:42.675768] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:42.683169] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:43.683499] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:43.691827] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:44.692258] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:44.699472] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:45.699788] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:45.707060] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:46.707377] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:46.716562] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:47.716848] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:47.724424] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:48.724735] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:48.732714] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:49.733107] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:49.740042] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:50.740511] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:50.748204] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:51.748724] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:51.755911] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:52.756188] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:52.763374] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:53.763624] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:53.770784] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:54.771134] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:54.779028] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:55.779332] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:55.786937] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:56.787340] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:56.794806] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:57.795120] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:57.802789] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:58.803086] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:58.810483] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:36:59.810757] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:36:59.819027] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:00.819507] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:00.827433] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:01.827763] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:01.835369] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:02.835716] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:02.843469] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:03.843761] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:03.851808] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:37:04.852187] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:04.859729] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:05.860016] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:05.867604] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:06.867971] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:06.875482] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:07.875756] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:07.883876] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:08.884153] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:08.891423] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [
  {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
  {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
  {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
  {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'},
  {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... the get_llmisvc start/end pair (logging.py:34/43) and the identical "Waiting: Missing true conditions" status dump above repeat once per second, unchanged except for the poll timestamps, from 2026-04-24T19:37:09 through 2026-04-24T19:37:47; every iteration reports the same ScalingCRDNotFound conditions with lastTransitionTime 2026-04-24T19:35:45Z; the final dump is truncated mid-entry at the end of this section ...]
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:48.234869] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:48.242957] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:49.243224] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:49.250799] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:50.251095] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:50.260397] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:51.260856] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:51.268732] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:52.269148] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:52.277637] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:53.278089] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:53.285730] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:54.286159] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:54.293412] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:37:55.293725] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:55.301167] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:56.301533] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:56.308972] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:57.309410] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:57.316683] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:58.316994] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:58.326016] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:37:59.326540] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:37:59.334033] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:00.334529] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:00.341694] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:01.341981] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:01.349379] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:02.349835] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:02.357286] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:03.357729] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:03.365708] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:04.366055] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:04.373944] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:05.374481] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:05.381552] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:06.381837] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:06.389557] end - ✅ in 
0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:07.390014] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:07.397928] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:08.398515] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:08.406060] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:09.406428] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:09.413212] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:10.413537] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:10.420982] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:11.421420] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:11.428566] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:12.428874] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:12.436540] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got:
    - type: MainWorkloadReady, status: False, reason: ScalingCRDNotFound, severity: Info, lastTransitionTime: 2026-04-24T19:35:45Z
      message: failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"
    - type: PresetsCombined, status: True, severity: Info, lastTransitionTime: 2026-04-24T19:35:45Z
    - type: Ready, status: False, reason: ScalingCRDNotFound, lastTransitionTime: 2026-04-24T19:35:45Z (same message as MainWorkloadReady)
    - type: RouterReady, status: Unknown, lastTransitionTime: 2026-04-24T19:35:45Z
    - type: WorkloadsReady, status: False, reason: ScalingCRDNotFound, lastTransitionTime: 2026-04-24T19:35:45Z (same message as MainWorkloadReady)
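The record above comes from a once-per-second wait loop (test_llm_inference_service.py:632): each get_llmisvc call returns in under 10 ms, but the condition set cannot change on its own, because ScalingCRDNotFound reports a missing prerequisite rather than a transient state. A minimal fail-fast sketch of such a loop in Python, with hypothetical names (wait_for_conditions and TERMINAL_REASONS are illustrative, not helpers from the repo):

    import time

    # Illustrative only: treat reasons that signal a missing prerequisite as fatal.
    TERMINAL_REASONS = {"ScalingCRDNotFound"}

    def wait_for_conditions(get_status, expected, timeout=300.0, interval=1.0):
        """Poll get_status() until every condition type in `expected` is True.

        get_status() returns condition dicts shaped like the log record above:
        {'type': ..., 'status': 'True'|'False'|'Unknown', 'reason': ..., 'message': ...}
        """
        deadline = time.monotonic() + timeout
        missing = set(expected)
        while time.monotonic() < deadline:
            conditions = get_status()
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return conditions
            fatal = [c for c in conditions if c.get("reason") in TERMINAL_REASONS]
            if fatal:
                # Waiting cannot fix a missing CRD; surface the root cause now
                # instead of logging the same condition set until timeout.
                raise RuntimeError(f"terminal condition: {fatal[0].get('message')}")
            time.sleep(interval)
        raise TimeoutError(f"missing true conditions: {missing}")

With a guard like this, the run would have failed at the first poll with the root-cause message instead of re-logging the unchanged conditions for minutes.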
[e2e-llm-inference-service] (the get_llmisvc poll and the identical "Waiting: Missing true conditions" record repeat roughly once per second from 19:38:13 through 19:38:49; the condition set never changes)
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:49.745356] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:49.752494] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:50.752827] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:50.760847] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:51.761257] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:51.769032] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:52.769381] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:52.776665] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:53.776966] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:53.783566] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:54.783975] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:54.791382] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:55.791677] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:55.799046] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:56.799555] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:56.807265] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:57.807737] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:57.814470] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:58.814728] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:58.821571] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:38:59.821923] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:38:59.828824] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:00.829119] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:00.836389] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:01.836846] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:01.844388] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:02.844829] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:02.852038] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:03.852373] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:03.862487] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:04.862786] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:04.872650] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:05.872986] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:05.880168] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:06.880483] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:06.887646] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:07.887955] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:07.896419] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:08.896923] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:08.903935] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:09.904364] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:09.911777] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:10.912163] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:10.919764] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:11.920202] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:11.927557] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:12.927811] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:12.934574] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:13.934885] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:13.945166] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:14.945699] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:14.953758] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:15.954065] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:15.963407] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:16.963689] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:16.970958] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:17.971332] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:17.978224] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:18.978532] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:18.986132] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:19.986456] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:19.994038] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:20.994408] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:21.001221] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:22.001502] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:22.009395] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:23.009904] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:23.017993] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:24.018341] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:24.026617] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:25.027097] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:25.035490] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:26.035970] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:26.043402] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:27.043840] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:27.051363] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:28.051644] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:28.059431] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:29.059871] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:29.068040] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:30.068345] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:30.075234] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:31.075777] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:31.083078] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:32.083388] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:32.092068] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:33.092353] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:33.099759] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:34.100133] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:34.108499] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:35.108930] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:35.116533] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:39:36.116930] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:36.124862] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:37.125354] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:37.134015] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:38.134381] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:38.142675] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:39.142977] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:39.149750] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:40.150124] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:40.157796] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:41.158096] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:41.169709] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:42.170012] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:42.178016] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:43.178506] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:43.185786] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:44.186066] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:44.193378] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:45.193696] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:45.202048] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:46.202546] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:46.210215] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:47.210517] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:47.220930] end - ✅ in 
0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:48.221319] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:48.230056] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:39:49.230359] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:39:49.238684] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got:
  - type=MainWorkloadReady  status=False    reason=ScalingCRDNotFound  severity=Info  lastTransitionTime=2026-04-24T19:35:45Z
    message: failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"
  - type=PresetsCombined    status=True     severity=Info              lastTransitionTime=2026-04-24T19:35:45Z
  - type=Ready              status=False    reason=ScalingCRDNotFound  lastTransitionTime=2026-04-24T19:35:45Z  (same message as MainWorkloadReady)
  - type=RouterReady        status=Unknown  lastTransitionTime=2026-04-24T19:35:45Z
  - type=WorkloadsReady     status=False    reason=ScalingCRDNotFound  lastTransitionTime=2026-04-24T19:35:45Z  (same message as MainWorkloadReady)
[... the get_llmisvc start/end pair and an identical "Waiting: Missing true conditions" dump repeat once per second, from 2026-04-24T19:39:50 through at least 2026-04-24T19:40:25, with no change in the reported conditions ...]
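The wait at test_llm_inference_service.py:632 is effectively a 1 Hz poll: fetch the LLMInferenceService, collect the condition types whose status is True, and compare against the expected set. A minimal sketch of that pattern, assuming the standard `kubernetes` Python client; the GROUP/PLURAL constants and the `wait_for_conditions` helper below are illustrative assumptions, not the actual harness code:

```python
# Minimal sketch of a one-second readiness poll over LLMInferenceService
# conditions. ASSUMPTIONS: standard `kubernetes` Python client; GROUP and
# PLURAL use the conventional KServe values and may differ from the harness.
import time
from kubernetes import client, config

GROUP, VERSION, PLURAL = "serving.kserve.io", "v1alpha1", "llminferenceservices"

def wait_for_conditions(name, namespace,
                        expected=frozenset({"Ready", "RouterReady", "WorkloadsReady"}),
                        timeout_s=300):
    config.load_kube_config()
    api = client.CustomObjectsApi()
    deadline = time.time() + timeout_s
    missing = set(expected)
    while time.time() < deadline:
        obj = api.get_namespaced_custom_object(GROUP, VERSION, namespace, PLURAL, name)
        conditions = obj.get("status", {}).get("conditions", [])
        # A condition only counts when its status is the string "True".
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = set(expected) - true_types
        if not missing:
            return conditions
        time.sleep(1)  # matches the one-second cadence visible in this log
    raise TimeoutError(f"missing true conditions: {missing}")

# e.g. wait_for_conditions("autoscale-keda-deploy", "kserve-ci-e2e-test")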
[2026-04-24T19:40:26.545997] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:26.552881] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:27.553163] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:27.560459] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:28.560897] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:28.568932] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:29.569234] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:29.579127] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:30.579420] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:30.586892] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:31.587373] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:31.594857] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:32.595379] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:32.602997] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:33.603494] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:33.610757] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:34.611114] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:34.618268] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:35.618798] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:35.628284] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:36.628715] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:36.637119] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:37.637650] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:37.645759] end - ✅ in 
0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:38.646080] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:38.653779] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:39.654062] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:39.661799] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:40.662088] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:40.668865] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:41.669146] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:41.676355] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:42.676776] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:42.683931] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:43.684288] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:43.695167] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:44.695483] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:44.703584] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:45.704053] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:45.711916] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:46.712390] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:46.719252] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:47.719835] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:47.727706] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:48.728123] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:48.735568] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:49.735833] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:49.742653] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:50.742958] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:50.750265] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:51.750758] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:51.757936] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:52.758352] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:52.765707] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:53.766030] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:53.773196] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 
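Diagnosis: the loop above is stuck, not slow. Every failing condition carries reason 'ScalingCRDNotFound' with the same message: the controller cannot resolve any kind "VariantAutoscaling" in version "llmd.ai/v1alpha1", i.e. the variant-autoscaling CRD was never installed on this test cluster, so Ready and WorkloadsReady can never flip to True and the test polls until its timeout. A minimal triage sketch against the cluster follows; it assumes the CRD uses standard pluralization (variantautoscalings.llmd.ai) and that kubectl can resolve the singular resource name llminferenceservice — neither name appears verbatim in this log.

    # Point kubectl at the kubeconfig written earlier in this task (path is illustrative).
    export KUBECONFIG=/credentials/cluster-kubeconfig

    # 1. List whatever is registered under the llmd.ai API group.
    #    An empty result matches the 'no matches for kind "VariantAutoscaling"' error.
    kubectl api-resources --api-group=llmd.ai

    # 2. Check for the CRD itself; "NotFound" confirms reason ScalingCRDNotFound.
    kubectl get crd variantautoscalings.llmd.ai

    # 3. Read the stuck resource's conditions directly instead of tailing the poll loop.
    kubectl get llminferenceservice autoscale-keda-deploy -n kserve-ci-e2e-test \
      -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'

    # 4. Reproduce the readiness wait outside pytest; expect a timeout while the CRD is absent.
    kubectl wait llminferenceservice/autoscale-keda-deploy -n kserve-ci-e2e-test \
      --for=condition=Ready --timeout=60s

If step 2 returns NotFound, the fix is deployment-side rather than test-side: install the variant-autoscaling CRDs (or skip/gate the autoscale-keda-deploy case when they are absent) before the e2e run reaches this test.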
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:54.773587] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:54.781737] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:55.782118] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:55.790729] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
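Every failing condition above carries the same root cause: the cluster serves no v1alpha1.VariantAutoscaling kind in API group llmd.ai, so the llm-d autoscaling reconcile can never progress and the test waits indefinitely. A minimal sketch of how one might confirm this against the cluster, assuming kubeconfig access and that the CRD's plural name is "variantautoscalings" (inferred from the kind; not shown anywhere in this log):

  # check_va_crd.py - hedged sketch; the CRD name "variantautoscalings.llmd.ai" is an assumption
  from kubernetes import client, config
  from kubernetes.client.rest import ApiException

  config.load_kube_config()  # uses the current kubeconfig/context
  api = client.ApiextensionsV1Api()
  try:
      crd = api.read_custom_resource_definition("variantautoscalings.llmd.ai")
      print("CRD present:", crd.metadata.name)
  except ApiException as e:
      if e.status == 404:
          # Matches the ScalingCRDNotFound reason seen in the conditions above.
          print("CRD missing - llm-d CRDs were not applied before the e2e run")
      else:
          raise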
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:56.791154] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:56.801000] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:57.801382] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:57.808831] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:58.809358] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:58.816484] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:40:59.816776] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:40:59.823926] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:00.824392] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:00.831472] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:01.831800] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:01.838807] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:02.839169] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:02.846064] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:03.846586] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:03.854290] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:04.854623] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:04.861575] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:05.861813] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:05.869192] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:06.869731] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:06.876514] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:07.876932] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:07.884501] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:08.884933] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:08.892932] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:09.893212] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:09.900198] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:10.900623] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:10.907750] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:11.908041] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:11.915037] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:12.915370] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:12.922988] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:13.923356] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:13.931177] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:14.931491] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:14.938703] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:15.938980] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:15.946650] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:41:16.946954] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:16.954595] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:17.954846] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:17.963077] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:18.963423] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:18.971383] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:19.971797] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:19.979804] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:20.980093] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:20.988015] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:21.988355] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:22.130953] end - ✅ in 0.142s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:23.131262] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:23.227779] end - ✅ in 0.096s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:24.228140] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:24.236799] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:25.237181] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:25.327102] end - ✅ in 0.090s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:26.327594] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:26.335160] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
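The one-second cadence elided above matches a plain poll-until-true loop over the resource's status conditions. A hypothetical sketch of that shape (the real helper at test_llm_inference_service.py:632 is not shown in this log):

  import time

  def wait_for_conditions(get_conditions, expected, timeout_s=600, interval_s=1.0):
      """Poll until every condition type in `expected` has status 'True'."""
      missing = set(expected)
      deadline = time.monotonic() + timeout_s
      while time.monotonic() < deadline:
          conditions = get_conditions()  # e.g. the llmisvc .status.conditions list
          true_types = {c["type"] for c in conditions if c.get("status") == "True"}
          missing = set(expected) - true_types
          if not missing:
              return conditions
          print(f"Waiting: Missing true conditions: {missing}, expected {set(expected)}, got {conditions}")
          time.sleep(interval_s)
      raise TimeoutError(f"conditions never became True: {missing}")

With the VariantAutoscaling CRD absent, a loop of this shape can never satisfy its exit condition, so the wait runs until the suite's timeout.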
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:27.335451] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:27.343636] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:28.344085] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:28.427925] end - ✅ in 
0.084s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:29.428192] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:29.528050] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:30.528516] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:30.536653] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:31.536922] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:31.544511] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:32.544765] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:32.552581] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:33.552897] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:33.560705] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:34.560984] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:34.569583] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:35.569882] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:35.578654] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:36.579007] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:36.586941] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:37.587222] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:37.595144] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:38.595493] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:38.604882] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:39.605387] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:39.613029] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:40.613368] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:40.621146] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:41.621503] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:41.629349] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:42.629665] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:42.639089] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:43.639493] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:43.647926] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:44.648403] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:44.656722] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:45.657061] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:45.665125] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:46.665520] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:46.674560] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:47.674858] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:47.682904] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:48.683263] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:48.690789] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:49.691068] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:49.698001] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:50.698276] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:50.706578] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:51.707058] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:51.714833] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:52.715378] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:52.722711] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:53.723009] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:53.730217] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:41:54.730506] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:41:54.738023] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [
    {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
    {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': '[same ScalingCRDNotFound message as above]', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
    {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'},
    {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': '[same ScalingCRDNotFound message as above]', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
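Every false condition above traces to the single root cause in 'message': the controller cannot find the VariantAutoscaling kind in API group llmd.ai at version v1alpha1 (reason ScalingCRDNotFound), so the KEDA-autoscaled variant of the LLMInferenceService can never become Ready and the test's readiness wait below can only spin until it times out. A minimal sketch of how one might confirm this against the test cluster follows; the plural CRD name variantautoscalings.llmd.ai and the lowercase resource name llminferenceservice are guesses derived from the kind and group in the log, not names confirmed by the pipeline:

    # Assumed CRD name (plural of kind VariantAutoscaling + group llmd.ai);
    # a NotFound error here confirms the ScalingCRDNotFound diagnosis
    kubectl get crd variantautoscalings.llmd.ai

    # Alternatively, list everything the cluster actually serves under the llmd.ai group
    kubectl api-resources --api-group=llmd.ai

    # Inspect the same status conditions the test polls
    # (resource name assumed from the LLMInferenceService kind)
    kubectl -n kserve-ci-e2e-test get llminferenceservice autoscale-keda-deploy \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status} {.reason}{"\n"}{end}'

If the CRD is indeed absent, the fix belongs in cluster/test setup (installing the llm-d VariantAutoscaling CRD before the autoscale-keda-deploy case runs) rather than in the test itself.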
[... the get_llmisvc poll and the identical "Waiting: Missing true conditions" report repeat once per second, from 2026-04-24T19:41:55 through 19:42:32, every iteration returning the same ScalingCRDNotFound conditions with lastTransitionTime 2026-04-24T19:35:45Z ...]
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:32.156277] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:33.156815] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:33.169467] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:34.169961] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:34.178186] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:35.178657] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:35.185767] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:36.186073] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:36.195174] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:37.195503] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:37.203527] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:38.204092] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:38.212498] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:39.212902] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:39.220942] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:40.221416] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:40.229781] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:41.230214] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:41.238062] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:42.238537] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:42.246481] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:43.246977] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:43.254019] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:44.254357] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:44.262279] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:45.262808] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:45.270777] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:46.271056] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:46.278374] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:47.278911] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:47.286567] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:48.287040] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:48.294839] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:49.295323] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:49.303775] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:50.304107] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:50.312412] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:51.312839] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:51.321358] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:52.321797] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:52.330847] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:53.331354] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:53.339087] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:54.339498] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:54.347174] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:55.347861] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:55.355583] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:56.355865] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:56.363473] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:57.364009] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:57.371834] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:42:58.372235] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:58.379881] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:42:59.380280] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:42:59.387577] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:37.707646] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:37.715434] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:38.715723] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:38.723444] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:39.723766] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:39.731885] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:40.732279] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:40.739822] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:41.740279] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:41.748211] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:42.748754] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:42.757216] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:43.757698] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:43.765813] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:44.766128] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:44.775588] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:45.776040] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:45.783629] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:46.783954] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:46.792393] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:47.792819] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:47.800247] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:43:48.800585] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:48.808933] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:49.809427] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:49.819394] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:50.819685] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:50.828039] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:51.828520] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:51.836481] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:52.836986] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:52.844829] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:53.845260] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:53.852822] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:54.853234] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:54.860524] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:55.860817] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:55.868224] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:56.868761] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:56.876377] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:57.876841] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:57.884866] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:58.885138] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:58.892566] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:43:59.893036] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:43:59.900930] end - ✅ in 
0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:00.901233] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:00.908995] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:01.909349] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:01.917479] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the get_llmisvc poll (logging.py:34/43) and this identical "Waiting: Missing true conditions" entry repeat at ~1s intervals from 19:44:02 through 19:44:39, each time carrying the same ScalingCRDNotFound condition payload; duplicate entries omitted ...]
[2026-04-24T19:44:39.230872] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:39.238785] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:40.239053] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:40.246632] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:41.246942] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:41.254988] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:42.255277] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:42.262675] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:43.262996] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:43.270774] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:44.271178] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:44.278512] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:45.278929] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:45.286508] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:46.286825] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:46.294241] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:47.294546] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:47.302213] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:48.302545] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:48.309599] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:49.309888] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:49.317529] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:50.317828] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:50.325233] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:51.325712] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:51.333665] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:52.333946] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:52.342277] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:53.342769] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:53.350749] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:54.351042] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:54.358621] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:55.358994] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:55.367437] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:56.367871] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:56.375416] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:57.375739] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:57.383646] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:58.384118] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:58.392096] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:44:59.392375] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:44:59.400424] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:00.400836] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:00.408953] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:01.409433] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:01.417294] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:02.417855] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:02.425586] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:03.426013] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:03.433535] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:04.433831] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:04.441382] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:05.441713] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:05.449966] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:06.450460] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:06.458293] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:07.458753] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:07.475125] end - ✅ in 0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:08.475636] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:08.483164] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:09.483690] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:09.491091] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:10.491361] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:10.500181] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:11.500626] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:11.508522] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:12.508995] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:12.516695] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:13.517116] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:13.524971] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:14.525293] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:14.533412] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:15.533860] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:15.541180] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:16.541484] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:16.549218] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:17.549594] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:17.557051] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:18.557370] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:18.564776] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:19.565231] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:19.573193] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:20.573769] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:20.581395] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:21.581788] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:21.589497] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:22.589844] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:22.597699] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:23.597990] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:23.605276] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:24.605822] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:24.613945] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:25.614223] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:25.621935] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:26.622223] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:26.630390] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:27.630850] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:27.638806] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:28.639252] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:28.646928] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:45:29.647194] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:29.654335] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:30.654627] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:30.662728] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:31.663180] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:31.670880] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:32.671328] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:32.678865] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:33.679166] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:33.687010] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:34.687498] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:34.695242] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:35.695722] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:35.704101] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:36.704396] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:36.719818] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:37.720107] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:37.728460] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:38.728923] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:38.736469] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:39.736790] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:39.745068] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:45:40.745554] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:45:40.753285] end - ✅ in 
[... the get_llmisvc poll (logging.py:34/43) and the identical "Waiting: Missing true conditions" record repeat once per second from 19:45:40 through 19:46:15; every iteration returns the same five conditions with reason ScalingCRDNotFound and lastTransitionTime 2026-04-24T19:35:45Z ...]
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:13.536469] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:13.544186] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:14.544577] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:14.554035] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:15.554516] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:15.562090] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:16.562565] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:16.569868] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:17.570184] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:17.578120] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:18.578620] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:18.586433] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:19.586889] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:19.626903] end - ✅ in 0.040s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:46:20.627182] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:20.634351] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:21.634725] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:21.642702] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:22.643162] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:22.650742] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:23.651073] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:23.660223] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:24.660673] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:24.668526] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:25.668986] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:25.676988] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:26.677349] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:26.685017] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:27.685419] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:27.693137] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:28.693514] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:28.701509] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:29.701813] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:29.709527] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:30.709821] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:30.717138] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:31.717671] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:31.725656] end - ✅ in 
0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:32.726160] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:32.733589] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:33.733857] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:33.741533] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:34.741980] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:34.753563] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:35.753977] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:35.762166] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:36.762610] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:36.770200] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:37.770604] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:37.778433] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:38.778893] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:38.786821] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:39.787195] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:39.794854] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:40.795163] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:40.804036] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:41.804412] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:41.812050] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:42.812347] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:42.820036] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:43.820555] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:43.828437] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:44.828695] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:44.836426] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:45.836732] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:45.844943] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:46.845483] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:46.854140] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:47.854665] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:47.862430] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:48.862898] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:48.870148] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:49.870601] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:49.878015] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:50.878413] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:50.886068] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:51.886574] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:51.896476] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:52.896934] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:52.906778] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:53.907187] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:53.915321] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:54.915600] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:54.923189] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:55.923487] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:55.931779] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:56.932248] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:56.941541] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:57.941961] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:57.950543] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:58.950809] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:58.958982] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:46:59.959523] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:46:59.967861] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:00.968254] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:00.976069] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:01.976388] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:01.984520] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:02.984879] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:02.993056] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:03.993495] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:04.000959] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:05.001418] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:05.008894] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:06.009174] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:06.016948] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:07.017200] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:07.024987] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:08.025279] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:08.033157] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:09.033452] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:09.041805] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:10.042219] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:10.050332] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:47:11.050618] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:11.058431] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:12.058882] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:12.066837] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:13.067118] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:13.076584] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:14.076997] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:14.084504] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
[... the get_llmisvc poll / "Waiting: Missing true conditions" cycle shown at 19:47:13 repeats once per second from 19:47:14 through 19:47:50 with identical condition output (same five conditions, same ScalingCRDNotFound message, lastTransitionTime pinned at 2026-04-24T19:35:45Z); 37 duplicate cycles elided ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:51.445316] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:51.452948] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'},
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:52.453374] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:52.460762] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:53.461085] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:53.468634] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:54.469089] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:54.476266] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:55.476585] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:55.527409] end - ✅ in 0.051s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:56.527694] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:56.535534] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:57.535829] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:57.627079] end - ✅ in 0.091s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:58.627625] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:58.636049] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:47:59.636504] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:47:59.644110] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:00.644409] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:00.652968] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:48:01.653380] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:01.661131] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:02.661618] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:02.669850] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:03.670163] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:03.677842] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:04.678192] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:04.685889] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:05.686245] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:05.693429] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:06.693811] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:06.701190] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:07.701492] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:07.709019] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:08.709430] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:08.717229] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:09.717634] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:09.724983] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:10.725275] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:10.733151] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:11.733595] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:11.827403] end - ✅ in 0.094s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:12.827752] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:12.927132] end - ✅ in 
0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:13.927427] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:14.027526] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:15.027891] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:15.040584] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:16.040971] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:16.048926] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:17.049210] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:17.131532] end - ✅ in 0.082s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:18.132001] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:18.139929] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
[e2e-llm-inference-service] (duplicate iterations elided: the get_llmisvc start/end pair and the identical "Waiting: Missing true conditions" dump above repeat once per second from 2026-04-24T19:48:18 through 2026-04-24T19:48:54, each time with the same ScalingCRDNotFound conditions and lastTransitionTime 2026-04-24T19:35:45Z) [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:55.844776] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:55.852702] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:56.852993] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:56.861442] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:57.861858] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:57.869831] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:58.870119] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:58.877452] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:48:59.877970] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:48:59.885979] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:00.886384] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:00.893956] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:01.894504] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:01.902038] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:02.902319] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:02.910240] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:03.910590] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:03.918369] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:04.918782] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:04.927009] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:05.927389] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:05.934919] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:06.935402] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:06.943458] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:07.943757] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:08.027007] end - ✅ in 0.083s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:09.027370] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:09.127184] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
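Editor's note: the records above come from a fixed-cadence readiness poll. Once per second the test re-reads the LLMInferenceService (the [get_llmisvc] start/end pair) and diffs the condition types whose status is 'True' against the expected set {'Ready', 'RouterReady', 'WorkloadsReady'}. A minimal sketch of such a loop follows; the names and signature are illustrative, since the real helper at test_llm_inference_service.py:632 is not shown in this log.

import time

def wait_for_true_conditions(get_llmisvc, name, namespace, expected, timeout_s=600):
    """Illustrative re-creation of the poll that emits the records above."""
    deadline = time.monotonic() + timeout_s
    missing = set(expected)
    while time.monotonic() < deadline:
        # Corresponds to one [get_llmisvc] start/end pair in the log.
        obj = get_llmisvc(name, namespace, "v1alpha1")
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = set(expected) - true_types
        if not missing:
            return conditions
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {set(expected)}, got {conditions}")
        time.sleep(1)  # matches the ~1 s spacing between polls in the log
    raise TimeoutError(f"conditions never became True: {missing}")

On this run such a loop cannot make progress: the controller re-sets MainWorkloadReady, Ready, and WorkloadsReady to False on every reconcile because the VariantAutoscaling lookup fails, so the wait is bounded only by its timeout.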
[... polling continues unchanged at ~1 s intervals from 2026-04-24T19:49:10 through 2026-04-24T19:49:27: MainWorkloadReady, Ready, and WorkloadsReady stay False with reason ScalingCRDNotFound, RouterReady stays Unknown, PresetsCombined stays True ...]
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:27.728022] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:27.735218] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:28.735723] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:28.743891] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:29.744167] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:29.752014] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:30.752348] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:30.760112] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:31.760391] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:31.832979] end - ✅ in 0.072s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:32.833352] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:32.840695] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:33.840983] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:33.848873] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:34.849371] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:34.857292] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:35.857624] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:35.865366] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:36.865653] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:36.873459] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:37.873858] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:37.881501] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:38.881902] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:38.889217] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:39.889674] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:39.897107] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:40.897424] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:40.905787] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:41.906092] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:41.913959] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:42.914371] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:42.922097] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:49:43.922410] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:43.930145] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:44.930517] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:44.938087] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:45.938380] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:45.946176] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:46.946485] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:46.954480] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:47.954896] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:47.964971] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:48.965283] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:48.973980] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:49.974415] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:49.981943] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:50.982373] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:50.990008] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:51.990512] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:51.997503] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:52.997823] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:53.006070] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:54.006354] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:54.014050] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [the same five conditions as above: MainWorkloadReady/Ready/WorkloadsReady False with reason ScalingCRDNotFound, PresetsCombined True, RouterReady Unknown]
[... this get_llmisvc poll and the identical "Waiting: Missing true conditions" record repeat roughly once per second, with unchanged condition payloads, until the final iteration at 19:50:30 shown below ...]
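For context, the repeating record above is produced by a condition-wait loop in the e2e test: each iteration fetches the LLMInferenceService, collects the condition types whose status is 'True', and retries until {'Ready', 'RouterReady', 'WorkloadsReady'} are all present or a timeout expires. A minimal sketch of such a loop, with hypothetical helper names (the real code is at test_llm_inference_service.py:632 and may differ in details):

```python
import time

def wait_for_true_conditions(get_llmisvc, name, namespace, expected, timeout_s=900):
    """Poll an LLMInferenceService until every condition type in `expected`
    has status 'True'; raise TimeoutError otherwise. Sketch only: get_llmisvc
    is assumed to return the resource as a dict, as in the log records above."""
    missing = set(expected)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resource = get_llmisvc(name, namespace, "v1alpha1")
        conditions = resource.get("status", {}).get("conditions", [])
        # Condition types currently reporting status 'True'.
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = set(expected) - true_types
        if not missing:
            return resource  # every expected condition is True
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {set(expected)}, got {conditions}")
        time.sleep(1)  # matches the ~1 s poll interval visible in the log
    raise TimeoutError(f"conditions never became True: {missing}")
```

Because all three expected conditions are pinned to False (or Unknown) by ScalingCRDNotFound, no amount of polling can succeed; the loop simply runs until its timeout and the test fails.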
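The root cause named in the repeated condition message is that the VariantAutoscaling CRD (group llmd.ai, version v1alpha1) is not installed on the test cluster, so the llmisvc controller cannot resolve the kind and never reconciles autoscale-keda-deploy-kserve-va. One way to confirm is to look the CRD up with the Kubernetes Python client; note that the plural name variantautoscalings.llmd.ai is an assumption inferred from the kind and group, so verify it against the actual llm-d CRD manifest:

```python
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod
api = client.ApiextensionsV1Api()

# Assumed plural form <plural>.<group>; check the real CRD manifest.
CRD_NAME = "variantautoscalings.llmd.ai"

try:
    crd = api.read_custom_resource_definition(CRD_NAME)
    print(f"{CRD_NAME} present, versions:",
          [v.name for v in crd.spec.versions])
except client.exceptions.ApiException as exc:
    if exc.status == 404:
        # Consistent with the controller error: the CRD was never applied,
        # so the kind cannot be matched in version llmd.ai/v1alpha1.
        print(f"{CRD_NAME} not installed")
    else:
        raise
```

If the CRD is missing, the likely fix is to install the llm-d VariantAutoscaling CRD as part of test setup (or skip the KEDA/VA scaling path for this scenario) rather than letting the wait loop run out its e2e timeout, as in the final iteration below.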
{'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:29.312268] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:29.319235] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:30.319685] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:30.326561] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:31.326959] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:31.334500] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:32.334864] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:32.342227] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:33.342568] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:33.350160] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T19:50:34.350570] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:34.358962] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:35.359259] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:35.367193] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:36.367733] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:36.375345] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:37.375773] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:37.383481] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:38.383853] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:38.391450] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:39.391817] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:39.399535] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:40.400076] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:40.408047] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:41.408383] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:41.416025] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:42.416404] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:42.424602] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:35:45Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-deploy-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:43.425070] start - args=(, 'autoscale-keda-deploy', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:43.433478] end - ✅ in 0.008s [e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T19:50:43.433694] end - ❌ 900.535s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got 
INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T19:50:43.433943] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
 'kind': 'LLMInferenceService',
 'metadata': {'annotations': None,
              'creation_timestamp': None,
              'deletion_grace_period_seconds': None,
              'deletion_timestamp': None,
              'finalizers': None,
              'generate_name': None,
              'generation': None,
              'labels': None,
              'managed_fields': None,
              'name': 'autoscale-keda-deploy',
              'namespace': 'kserve-ci-e2e-test',
              'owner_references': None,
              'resource_version': None,
              'self_link': None,
              'uid': None},
 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-keda-d-27e06c40'},
                       {'name': 'workload-llmd-simulator-no-repl-da49e827'},
                       {'name': 'scaling-keda-autoscale-keda-dep-1ac84077'}]},
 'status': None}), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [delete_llmisvc] [2026-04-24T19:50:43.454006] end - ✅ in 0.020s
ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_autoscaling_keda_deployment] [2026-04-24T19:50:43.454120] end - ❌ 900.621s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [the same five-condition set as above]
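The condition set never changes because this is not a slow rollout but a missing API: the controller's discovery lookup for kind "VariantAutoscaling" in group/version llmd.ai/v1alpha1 finds nothing, so the KEDA-backed scaling reconcile fails the same way on every pass (reason ScalingCRDNotFound), and no amount of polling can succeed. A minimal sketch of the equivalent check from outside the controller, assuming the standard kubernetes Python client, a kubeconfig in the default location, and that the CRD's plural is variantautoscalings (the plural is inferred from the kind, not confirmed by this log):

# Editorial sketch, not part of the CI run: probe whether the cluster serves
# llmd.ai/v1alpha1 VariantAutoscaling, the resource the reconciler failed on.
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable cluster via ~/.kube/config

try:
    out = client.CustomObjectsApi().list_cluster_custom_object(
        group="llmd.ai", version="v1alpha1", plural="variantautoscalings"
    )
    print(f"CRD served; {len(out.get('items', []))} VariantAutoscaling object(s)")
except client.ApiException as exc:
    # A 404 here is the REST-level analogue of the controller's
    # 'no matches for kind "VariantAutoscaling"' discovery error.
    print(f"CRD not served: HTTP {exc.status} {exc.reason}")

If this probe 404s, installing the CRD (and whatever component owns it) before the test run is the fix; nothing in the remaining wait budget could have changed the outcome.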
_ test_llm_inference_service[router-managed-scheduler-with-configmap-ref-workload-llmd-simulator] _
[gw1] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

test_case = TestCase(base_refs=['router-managed', 'scheduler-with-configmap-ref', 'workload-llmd-simulator'], prompt='KServe is a'... {'name': 'workload-llmd-simulator-schedul-b2f159da'}]}, 'status': None}, model_name='facebook/opt-125m')

@pytest.mark.llminferenceservice
@pytest.mark.asyncio(loop_scope="session")
@pytest.mark.parametrize(
    "test_case",
    [
        pytest.param(
            TestCase(
                base_refs=[
                    "router-with-gateway-ref",
                    "router-with-managed-route",
                    "model-fb-opt-125m",
                    "workload-llmd-simulator",
                ],
                endpoint="/v1/completions",
                prompt="KServe is a",
                payload_formatter=completions_payload,
                response_assertion=create_response_assertion(with_field="choices"),
                before_test=[
                    lambda: create_router_resources(
                        gateways=[ROUTER_GATEWAYS[0]],
                    )
                ],
            ),
            marks=[
                pytest.mark.cluster_cpu,
                pytest.mark.cluster_single_node,
                pytest.mark.llmd_simulator,
                pytest.mark.custom_gateway,
            ],
        ),
        pytest.param(
            TestCase(
                base_refs=[
                    "router-managed",
                    "workload-single-cpu",
                    "model-fb-opt-125m",
                ],
                prompt="KServe is a",
                payload_formatter=completions_payload,
                response_assertion=assert_200_with_choices,
            ),
            marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
        ),
        pytest.param(
            TestCase(
                base_refs=[
                    "router-custom-route-timeout",
                    "scheduler-managed",
                    "workload-single-cpu",
                    "model-fb-opt-125m",
                ],
                prompt="KServe is a",
                service_name="custom-route-timeout-test",
            ),
            marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
        ),
        pytest.param(
            TestCase(
                base_refs=[
                    "router-with-refs",
                    "scheduler-managed",
                    "workload-single-cpu",
                    "model-fb-opt-125m",
                ],
                prompt="KServe is a",
                service_name="router-with-refs-test",
                before_test=[
                    lambda: create_router_resources(
                        gateways=[ROUTER_GATEWAYS[0]],
                        routes=[ROUTER_ROUTES[0], ROUTER_ROUTES[1]],
                    )
                ],
            ),
            marks=[
                pytest.mark.cluster_cpu,
                pytest.mark.cluster_single_node,
                pytest.mark.custom_gateway,
            ],
        ),
        pytest.param(
            TestCase(
                base_refs=["router-managed", "workload-pd-cpu", "model-fb-opt-125m"],
                prompt="You are an expert in Kubernetes-native machine learning serving platforms, with deep knowledge of the KServe project. "
                "Explain the challenges of serving large-scale models, GPU scheduling, and how KServe integrates with capabilities like multi-model serving. "
                "Provide a detailed comparison with open source alternatives, focusing on operational trade-offs.",
                response_assertion=assert_200_with_choices,
            ),
            marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
        ),
        pytest.param(
            TestCase(
                base_refs=[
                    "router-custom-route-timeout-pd",
                    "scheduler-managed",
                    "workload-pd-cpu",
                    "model-fb-opt-125m",
                ],
                prompt="You are an expert in Kubernetes-native machine learning serving platforms, with deep knowledge of the KServe project. "
                "Explain the challenges of serving large-scale models, GPU scheduling, and how KServe integrates with capabilities like multi-model serving. "
                "Provide a detailed comparison with open source alternatives, focusing on operational trade-offs.",
                service_name="custom-route-timeout-pd-test",
                response_assertion=assert_200_with_choices,
            ),
            marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
        ),
        pytest.param(
            TestCase(
                base_refs=[
                    "router-with-refs-pd",
                    "scheduler-managed",
                    "workload-pd-cpu",
                    "model-fb-opt-125m",
                ],
                prompt="You are an expert in Kubernetes-native machine learning serving platforms, with deep knowledge of the KServe project. "
                "Explain the challenges of serving large-scale models, GPU scheduling, and how KServe integrates with capabilities like multi-model serving. "
                "Provide a detailed comparison with open source alternatives, focusing on operational trade-offs.",
                service_name="router-with-refs-pd-test",
                response_assertion=assert_200_with_choices,
                before_test=[
                    lambda: create_router_resources(
                        gateways=[ROUTER_GATEWAYS[1]],
                        routes=[ROUTER_ROUTES[2], ROUTER_ROUTES[3]],
                    )
                ],
            ),
            marks=[
                pytest.mark.cluster_cpu,
                pytest.mark.cluster_single_node,
                pytest.mark.custom_gateway,
            ],
        ),
        pytest.param(
            TestCase(
                base_refs=[
                    "router-managed",
                    "workload-dp-ep-gpu",
                    "workload-dp-ep-prefill-gpu",
                    "model-deepseek-v2-lite",
                ],
                prompt="Delve into the multifaceted implications of a fully disaggregated cloud architecture, specifically "
                "where the compute plane (P) and the data plane (D) are independently deployed and managed for a "
                "geographically distributed, high-throughput, low-latency microservices ecosystem. Beyond the "
                "fundamental challenges of network latency and data consistency, elaborate on the advanced "
                "considerations and trade-offs inherent in such a setup: 1. Network Architecture and Protocols: "
                "How would the network fabric and underlying protocols (e.g., RDMA, custom transport layers) need to "
                "evolve to support optimal performance and minimize inter-plane communication overhead, especially for "
                "synchronous operations? Discuss the role of network programmability (e.g., SDN, P4) in dynamically "
                "optimizing routing and traffic flow between P and D. 2. Advanced Data Consistency and Durability: "
                "Explore sophisticated data consistency models (e.g., causal consistency, strong eventual consistency) "
                "and their applicability in balancing performance and data integrity across a globally distributed data plane. "
                "Detail strategies for ensuring data durability and fault tolerance, including multi-region replication, "
                "intelligent partitioning, and recovery mechanisms in the event of partial or full plane failures. "
                "3. Dynamic Resource Orchestration and Cost Optimization: Analyze how an orchestration layer would intelligently "
                "manage the independent scaling of compute (P) and data (D) resources, considering fluctuating workloads, "
                "cost efficiency, and performance targets (e.g., using predictive analytics for resource provisioning). "
                "Discuss mechanisms for dynamically reallocating compute nodes to different data partitions based on "
                "workload patterns and data locality, potentially involving live migration strategies. "
                "4. Security and Compliance in a Distributed Landscape: Address the enhanced security perimeter "
                "challenges, including securing communication channels between P and D (encryption in transit, mutual TLS), "
                "fine-grained access control to data at rest and in motion, and identity management across disaggregated "
                "components. Discuss how such an architecture impacts compliance with regulatory frameworks (e.g., GDPR, HIPAA) "
                "concerning data sovereignty, privacy, and auditability. 5. Operational Complexity and Observability: "
                "Examine the increased complexity in monitoring, logging, and tracing across highly decoupled compute and "
                "data planes. What specialized tooling and practices (e.g., distributed tracing with OpenTelemetry, advanced AIOps) "
                "would be essential? How would incident response and troubleshooting differ in this disaggregated environment "
                "compared to traditional integrated systems? Consider the challenges of pinpointing root causes across "
                "independent failures. 6. Real-world Applicability and Future Trends: Identify specific industries "
                "or use cases (e.g., high-frequency trading, IoT edge processing, large language model inference) "
                "where the benefits of P/D disaggregation would strongly outweigh its complexities. "
                "Conclude by speculating on emerging technologies or paradigms (e.g., serverless compute functions "
                "directly interacting with object storage, in-memory disaggregation) that could further drive or "
                "transform P/D disaggregation in cloud computing.",
                max_tokens=2000,
            ),
            marks=[
                pytest.mark.cluster_gpu,
                pytest.mark.cluster_nvidia,
                pytest.mark.cluster_nvidia_roce,
            ],
        ),
        pytest.param(
            TestCase(
                base_refs=[
                    "router-no-scheduler",
                    "workload-single-cpu",
                    "model-fb-opt-125m",
                ],
                prompt="What is KServe?",
            ),
            marks=[
                pytest.mark.cluster_cpu,
                pytest.mark.cluster_single_node,
                pytest.mark.no_scheduler,
            ],
        ),
        pytest.param(
            TestCase(
                base_refs=[
                    "router-managed",
                    "workload-simulated-dp-ep-cpu",
                    "model-fb-opt-125m",
                ],
                prompt="This test simulates DP+EP that can run on CPU, the idea is to test the LWS-based deployment, "
                "but without the resources requirements for DP+EP (GPUs and ROCe/IB).",
            ),
            marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_multi_node],
        ),
        # Scheduler config tests
        pytest.param(
            TestCase(
                base_refs=[
                    "router-managed",
                    "scheduler-with-inline-config",
                    "workload-llmd-simulator",
                ],
                prompt="KServe is a",
                service_name="scheduler-inline-config-test",
            ),
            marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
        ),
        pytest.param(
            TestCase(
                base_refs=[
                    "router-managed",
                    "scheduler-with-configmap-ref",
                    "workload-llmd-simulator",
                ],
                prompt="KServe is a",
                service_name="scheduler-configmap-ref-test",
                before_test=[create_scheduler_configmap],
                after_test=[delete_scheduler_configmap],
            ),
            marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
        ),
        pytest.param(
            TestCase(
                base_refs=[
                    "router-managed",
                    "scheduler-with-replicas",
                    "workload-llmd-simulator",
                ],
                prompt="KServe is a",
                service_name="scheduler-ha-replicas-test",
            ),
            marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
        ),
        # Precise prefix KV cache routing test
        pytest.param(
            TestCase(
                base_refs=[
                    "router-managed",
                    "scheduler-with-precise-prefix-cache-inline-config",
                    "workload-llmd-simulator-kvcache",
                ],
                prompt="KServe is a",
                service_name="precise-prefix-cache-test",
            ),
            marks=[
                pytest.mark.cluster_cpu,
                pytest.mark.cluster_single_node,
                pytest.mark.llmd_simulator,
            ],
        ),
    ],
    indirect=["test_case"],
    ids=generate_test_id,
)
@log_execution
def test_llm_inference_service(test_case: TestCase):  # noqa: F811
    inject_k8s_proxy()

    kserve_client = KServeClient(
        config_file=os.environ.get("KUBECONFIG", "~/.kube/config"),
        client_configuration=client.Configuration(),
    )

    service_name = test_case.llm_service.metadata.name
    if not test_case.llm_service.metadata.annotations:
        test_case.llm_service.metadata.annotations = {}

    test_case.llm_service.metadata.annotations[
        "security.opendatahub.io/enable-auth"
    ] = "false"

    try:
        create_llmisvc(kserve_client, test_case.llm_service)
>       wait_for_llm_isvc_ready(
            kserve_client, test_case.llm_service, test_case.wait_timeout
        )

llmisvc/test_llm_inference_service.py:410:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
args = (, {'api_version': 'serving.kserve.io/v1alpha1', 'kin...ef-sc-67492bcd'}, {'name': 'workload-llmd-simulator-schedul-b2f159da'}]}, 'status': None}, 900)
kwargs = {}, func_name = 'wait_for_llm_isvc_ready'
timestamp_start = '2026-04-24T19:49:05.091923', start_time = 1777060145.0923634
duration = 900.5100934505463, timestamp_end = '2026-04-24T20:04:05.602457'

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func_name = func.__name__

        timestamp_start = datetime.now().isoformat()
        logger.info(
            f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
        )
        start_time = time.time()

        try:
>           result = func(*args, **kwargs)

llmisvc/logging.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

kserve_client = 
given = {'api_version': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceService', 'metadata': {'annotations': {'security....gmap-ref-sc-67492bcd'}, {'name': 'workload-llmd-simulator-schedul-b2f159da'}]}, 'status': None}
timeout_seconds = 900

    @log_execution
    def wait_for_llm_isvc_ready(
        kserve_client: KServeClient,
        given: V1alpha1LLMInferenceService,
        timeout_seconds: int = 900,
    ) -> str:
        def assert_llm_isvc_ready():
            out = get_llmisvc(
                kserve_client,
                given.metadata.name,
                given.metadata.namespace,
                given.api_version.split("/")[1],
            )

            if "status" not in out:
                raise AssertionError("No status found in LLM inference service")

            status = out["status"]
            if "conditions" not in status:
                raise AssertionError("No conditions found in status")

            expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
            got_true_conditions = set()

            conditions = status["conditions"]

            for condition in conditions:
                if condition.get("status") == "True":
                    got_true_conditions.add(condition.get("type"))

            missing_conditions = expected_true_conditions - got_true_conditions
            if missing_conditions:
                raise AssertionError(
                    f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
                )
            return True

>       return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0)

llmisvc/test_llm_inference_service.py:618:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

assertion_fn = .assert_llm_isvc_ready at 0x7f60bd602ac0>
timeout = 900, interval = 1.0

    def wait_for(
        assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
    ) -> Any:
        """Wait for the assertion to succeed within timeout."""
        deadline = time.time() + timeout
        while True:
            try:
>               return assertion_fn()

llmisvc/test_llm_inference_service.py:628:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def assert_llm_isvc_ready():
        out = get_llmisvc(
            kserve_client,
            given.metadata.name,
            given.metadata.namespace,
            given.api_version.split("/")[1],
        )

        if "status" not in out:
            raise AssertionError("No status found in LLM inference service")

        status = out["status"]
        if "conditions" not in status:
            raise AssertionError("No conditions found in status")

        expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
        got_true_conditions = set()

        conditions = status["conditions"]

        for condition in conditions:
            if condition.get("status") == "True":
                got_true_conditions.add(condition.get("type"))

        missing_conditions = expected_true_conditions - got_true_conditions
        if missing_conditions:
>           raise AssertionError(
                f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
            )
E       AssertionError: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got
E       [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'},
E        {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'},
E        {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
E        {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
E        {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'},
E        {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'},
E        {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'},
E        {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]

llmisvc/test_llm_inference_service.py:613: AssertionError
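Unlike the ScalingCRDNotFound case above, this failure is a plain availability timeout: HTTPRoutesReady, InferencePoolReady, SchedulerWorkloadReady and RouterReady all reached True, but the main workload Deployment reported MinimumReplicasUnavailable for the full 900 s, which keeps WorkloadsReady and therefore Ready at False. A sketch of how one might probe the same signal directly, assuming the standard kubernetes Python client and the kserve-ci-e2e-test namespace from the log:

# Editorial sketch, not part of the CI run: list Deployment availability in the
# test namespace; a pod stuck Pending or crash-looping shows up as ready < spec.
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable cluster via ~/.kube/config

for dep in client.AppsV1Api().list_namespaced_deployment("kserve-ci-e2e-test").items:
    s = dep.status
    print(
        f"{dep.metadata.name}: ready={s.ready_replicas or 0}"
        f"/{s.replicas or 0}, available={s.available_replicas or 0}"
    )

The pod events and container logs of whichever Deployment reports ready=0 would be the next place to look; the captured log below only shows the fixture setup and the create/wait calls, not the pod-level cause.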
'2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:613: AssertionError [e2e-llm-inference-service] ------------------------------ Captured log setup ------------------------------ [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1372 Created ConfigMap scheduler-config-e2e in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-scheduler-config-305f7a8b in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-scheduler-config-305f7a8b [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-scheduler-config-305f7a8b [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scheduler-with-configmap-ref-sc-67492bcd in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scheduler-with-configmap-ref-sc-67492bcd [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scheduler-with-configmap-ref-sc-67492bcd [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-llmd-simulator-schedul-b2f159da in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-llmd-simulator-schedul-b2f159da [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-llmd-simulator-schedul-b2f159da [e2e-llm-inference-service] ------------------------------ Captured log call ------------------------------- [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [test_llm_inference_service] [2026-04-24T19:49:05.037813] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'scheduler-with-configmap-ref', 'workload-llmd-simulator'], prompt='KServe is a', service_name='scheduler-configmap-ref-test', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, 
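Editor's note: the traceback truncates wait_for's retry branch right after `return assertion_fn()`. A minimal reconstruction, consistent with the 1.0 s interval passed at test_llm_inference_service.py:618 and the once-per-second "Waiting: ..." INFO lines captured below, might look like the sketch that follows; the exact deadline handling and logger setup are assumptions, not verified source.

import logging
import time
from typing import Any, Callable

# Logger name taken from the captured log lines below.
logger = logging.getLogger("e2e.llmisvc.test_llm_inference_service")


def wait_for(
    assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
) -> Any:
    """Wait for the assertion to succeed within timeout."""
    deadline = time.time() + timeout
    while True:
        try:
            return assertion_fn()
        except AssertionError as e:
            # Once the deadline passes, re-raise the last failure; this is
            # the AssertionError that surfaces in the traceback above.
            if time.time() >= deadline:
                raise
            # Produces the repeated "Waiting: <reason>" lines captured below.
            logger.info("Waiting: %s", e)
            time.sleep(interval)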
------------------------------ Captured log setup ------------------------------
INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
INFO e2e.llmisvc.logging:fixtures.py:1372 Created ConfigMap scheduler-config-e2e in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-scheduler-config-305f7a8b in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-scheduler-config-305f7a8b
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-scheduler-config-305f7a8b
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scheduler-with-configmap-ref-sc-67492bcd in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scheduler-with-configmap-ref-sc-67492bcd
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scheduler-with-configmap-ref-sc-67492bcd
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-llmd-simulator-schedul-b2f159da in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-llmd-simulator-schedul-b2f159da
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-llmd-simulator-schedul-b2f159da
------------------------------ Captured log call -------------------------------
INFO e2e.llmisvc.logging:logging.py:34 [test_llm_inference_service] [2026-04-24T19:49:05.037813] start - args=(), kwargs={'test_case': TestCase(
    base_refs=['router-managed', 'scheduler-with-configmap-ref', 'workload-llmd-simulator'],
    prompt='KServe is a', service_name='scheduler-configmap-ref-test',
    endpoint='/v1/completions', max_tokens=100, payload_formatter=None,
    response_assertion=, wait_timeout=900, response_timeout=60,
    before_test=[], after_test=[],
    llm_service={'api_version': 'serving.kserve.io/v1alpha1',
                 'kind': 'LLMInferenceService',
                 'metadata': {'annotations': None,
                              'name': 'scheduler-configmap-ref-test',
                              'namespace': 'kserve-ci-e2e-test',
                              [all other metadata fields None]},
                 'spec': {'baseRefs': [{'name': 'router-managed-scheduler-config-305f7a8b'},
                                       {'name': 'scheduler-with-configmap-ref-sc-67492bcd'},
                                       {'name': 'workload-llmd-simulator-schedul-b2f159da'}]},
                 'status': None},
    model_name='facebook/opt-125m')}
INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T19:49:05.050792] start - args=(..., [the LLMInferenceService above, now annotated {'security.opendatahub.io/enable-auth': 'false'}]), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T19:49:05.091704] end - ✅ in 0.041s
INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T19:49:05.091923] start - args=(..., [the same annotated LLMInferenceService], 900), kwargs={}
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:05.092372] start - args=(..., 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:05.098677] end - ✅ in 0.006s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[... the get_llmisvc poll and the "Waiting: No conditions found in status" message repeat once per second through the 19:49:24 poll ...]
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:25.430366] start - args=(..., 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:25.627463] end - ✅ in 0.197s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'},
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'},
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'status': 'False', 'type': 'Ready'},
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'status': 'False', 'type': 'RouterReady'},
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'reason': 'Progressing', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the same "Progressing" condition set repeats with each one-second poll through the 19:49:31 poll ...]
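Editor's note: the failure is easiest to see by replaying the readiness check's set logic on a captured snapshot. The sketch below uses the 19:49:25 condition list shown above, trimmed to the two fields assert_llm_isvc_ready actually reads.

# Condition snapshot from the 19:49:25 poll, reduced to the fields the check uses.
conditions = [
    {"type": "HTTPRoutesReady", "status": "True"},
    {"type": "InferencePoolReady", "status": "True"},
    {"type": "MainWorkloadReady", "status": "False"},
    {"type": "PresetsCombined", "status": "True"},
    {"type": "Ready", "status": "False"},
    {"type": "RouterReady", "status": "False"},
    {"type": "SchedulerWorkloadReady", "status": "False"},
    {"type": "WorkloadsReady", "status": "False"},
]

expected_true = {"Ready", "WorkloadsReady", "RouterReady"}
got_true = {c["type"] for c in conditions if c["status"] == "True"}

print(expected_true - got_true)
# -> {'Ready', 'WorkloadsReady', 'RouterReady'}: all three gates are still False here.
# By the final failure only RouterReady and SchedulerWorkloadReady (19:49:56) had
# flipped to True, leaving {'Ready', 'WorkloadsReady'} missing -- exactly the
# AssertionError in the traceback above.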
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:32.833355] start - args=(..., 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:32.840804] end - ✅ in 0.007s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'},
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'},
    {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
    {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'},
    {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'},
    {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[... this "MinimumReplicasUnavailable" state repeats with each one-second poll; the log excerpt ends mid-poll at 19:49:52. Per the AssertionError above, polling continued to the 900 s timeout, by which point only RouterReady and SchedulerWorkloadReady (19:49:56) had turned True ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:53.006062] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:54.006354] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:54.013701] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum 
availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:55.013953] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:55.021468] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:56.021881] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:56.029803] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, 
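For context on what produces this stream: the readiness gate at test_llm_inference_service.py:632 polls the LLMInferenceService custom resource once per second and diffs its status.conditions against the expected set. A minimal sketch of that polling pattern, with hypothetical helper names and signatures (this is not the actual KServe test code):

    import time

    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

    def wait_for_conditions(get_llmisvc, name, namespace, timeout_s=600):
        # Poll until every expected condition type reports status == 'True'.
        missing = set(EXPECTED)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            resource = get_llmisvc(name, namespace, "v1alpha1")
            conditions = resource.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return resource  # all expected conditions are True
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {EXPECTED}, got {conditions}")
            time.sleep(1)
        raise TimeoutError(f"Conditions never became True: {missing}")

Read this way, the entries above are consecutive iterations in which scheduler-configmap-ref-test keeps reporting the same five False conditions, so the loop keeps waiting.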
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:49:57.030095] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:49:57.038472] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
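The 19:49:57 iteration is the first state change: RouterReady and SchedulerWorkloadReady flipped to True at 19:49:56, so only the main workload is still behind. MainWorkloadReady, Ready, and WorkloadsReady keep reporting MinimumReplicasUnavailable, the reason the Deployment controller puts on a Deployment's Available condition while availableReplicas is below the required minimum. One way to see which Deployment is stuck, sketched with the kubernetes Python client against the test namespace (an illustrative diagnostic, not part of the test suite):

    from kubernetes import client, config

    config.load_kube_config()  # assumes KUBECONFIG points at the test cluster
    apps = client.AppsV1Api()

    # Report availability for every Deployment in the e2e namespace.
    for dep in apps.list_namespaced_deployment("kserve-ci-e2e-test").items:
        status = dep.status
        print(f"{dep.metadata.name}: {status.available_replicas or 0}/{status.replicas} available")
        for cond in status.conditions or []:
            if cond.type == "Available" and cond.status != "True":
                print(f"  {cond.reason}: {cond.message}")

On a healthy rollout the Available condition turns True and the missing-conditions set above would shrink to empty; here the main workload never reaches minimum availability, which is what keeps the wait loop spinning below.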
[... polling continues once per second with this condition set unchanged: from 2026-04-24T19:49:58.038898 through 2026-04-24T19:50:26.292011, every iteration (each completing in 0.007-0.017s) logs the same 'Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}' message ...]

[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:27.292280] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:27.300606] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:28.301048] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:28.308913] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:29.309237] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:29.317974] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:30.318275] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:30.326038] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:31.326383] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:31.334366] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:32.334653] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:32.342358] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:33.342636] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:33.349948] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:34.350340] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:50:34.358586] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:35.359125] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:35.367409] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:36.367728] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:36.375410] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:37.375777] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:37.383470] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:38.383747] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:38.391193] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:39.391513] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:39.399435] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:40.399878] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:40.407931] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:41.408173] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:41.415925] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:42.416183] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:42.424838] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:43.425201] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:43.433739] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:44.434142] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:44.444105] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:45.444357] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:45.452619] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:46.452941] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:46.461235] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:47.461541] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:47.469413] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:48.469697] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:48.477263] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:49.477584] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:49.527643] end - ✅ in 0.050s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:50.527902] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:50.535621] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:51.535876] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:51.543724] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:52.544010] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:52.627427] end - ✅ in 0.083s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:53.627851] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:53.635610] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:54.635939] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:54.643056] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:55.643372] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:55.651288] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:56.651768] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:56.659745] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
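For readability, here is a minimal Python sketch of the polling loop these repeated records reflect: the test fetches the LLMInferenceService every second, collects the condition types whose status is 'True', and keeps waiting while the expected set {'Ready', 'WorkloadsReady', 'RouterReady'} is not fully covered. Function and parameter names below are illustrative assumptions, not the actual helpers in test_llm_inference_service.py.

```python
import time

# Hypothetical reconstruction of the readiness wait behind the
# "Waiting: Missing true conditions" records above; names, the 1 s
# interval, and the timeout are assumptions, not the real test code.
def wait_for_llmisvc_conditions(get_llmisvc, name, namespace,
                                expected=frozenset({"Ready", "WorkloadsReady", "RouterReady"}),
                                interval_s=1.0, timeout_s=600.0):
    """Poll the LLMInferenceService until every expected condition is True."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resource = get_llmisvc(name, namespace, "v1alpha1")
        conditions = resource.get("status", {}).get("conditions", [])
        # A condition counts as satisfied only when its status is the string 'True'.
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = expected - true_types
        if not missing:
            return conditions
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {set(expected)}, got {conditions}")
        time.sleep(interval_s)
    raise TimeoutError(f"conditions {set(expected)} never became True within {timeout_s}s")
```

Under this reading of the log, the test was stuck because the workload Deployment never gained minimum availability, so 'WorkloadsReady' (and hence 'Ready') could not flip to 'True' no matter how long the loop ran.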
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:37.993355] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:38.001501] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:39.002010] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:39.009947] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:40.010510] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:40.018075] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:41.018335] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:41.025646] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:42.025946] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:42.033818] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:43.034265] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:43.042021] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:44.042476] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:44.050247] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:45.050558] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:45.058971] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:46.059248] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:46.067054] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:47.067448] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:47.075705] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:48.075995] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:48.083274] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:49.083818] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:49.091818] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:50.092097] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:50.099783] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:51.100060] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:51.108111] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:52.108634] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:52.116932] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:53.117209] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:53.124975] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:54.125246] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:54.133213] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:55.133676] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:55.141150] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:56.141458] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:51:56.149130] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:57.149406] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:57.157080] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:58.157336] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:58.165129] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:59.165522] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:59.173356] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:00.173630] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:00.181466] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:01.181769] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:01.189667] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:02.189938] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:02.198076] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:03.198584] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:03.207716] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:04.207967] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:04.216289] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:05.216645] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:05.225286] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:06.225788] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:06.233715] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]

The test then repeats this poll once per second for the rest of the excerpt (get_llmisvc calls at 2026-04-24T19:52:07 through 19:52:49, each returning in 0.007-0.044s), and every iteration logs the identical line from test_llm_inference_service.py:632: Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}. The condition set reported for the scheduler-configmap-ref-test LLMInferenceService in namespace kserve-ci-e2e-test (API version v1alpha1) never changes during this window: HTTPRoutesReady, InferencePoolReady, PresetsCombined, RouterReady, and SchedulerWorkloadReady are True, while MainWorkloadReady, Ready, and WorkloadsReady remain False with reason MinimumReplicasUnavailable and message 'Deployment does not have minimum availability.', all carrying lastTransitionTime values from 19:49.
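The repeated Waiting lines come from a poll-until-ready loop in the test. A minimal sketch of that pattern follows (illustrative only, not the actual kserve helper; the wait_for_conditions name, its parameters, and the dict-shaped return value of the get_llmisvc stand-in are assumptions): the missing set is just the expected condition types minus the types whose status is 'True'.

    import time

    # Illustrative sketch, not the actual kserve test helper.
    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

    def wait_for_conditions(get_llmisvc, name, namespace, version="v1alpha1",
                            expected=EXPECTED, timeout=600, interval=1.0):
        # Poll the resource until every expected condition type reports
        # status "True", matching the one-call-per-second cadence in the log.
        deadline = time.monotonic() + timeout
        missing = set(expected)
        while time.monotonic() < deadline:
            llmisvc = get_llmisvc(name, namespace, version)
            conditions = llmisvc.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = expected - true_types
            if not missing:
                return llmisvc
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {expected}, got {conditions}")
            time.sleep(interval)
        raise TimeoutError(f"conditions {missing} still not True after {timeout}s")

With the status above, missing evaluates to {'Ready', 'WorkloadsReady'} on every pass, which is exactly the stall the log records; a loop like this exits only when the workload finally reports availability or the timeout fires.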
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:48.621921] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:49.622210] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:49.629905] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:50.630182] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:52:50.638019] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:51.638292] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:51.645768] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:52.646118] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:52.654113] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:53.654361] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:53.663203] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:54.663513] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:54.671223] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:55.671528] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:55.679438] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:56.679849] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:56.687550] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:57.687841] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:57.695435] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:58.695679] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:58.703389] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:59.703741] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:59.711963] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:00.712398] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:00.719736] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:01.720044] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:01.728200] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:02.728699] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:02.736939] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:03.737355] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:03.744916] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:04.745200] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:04.753171] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:05.753617] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:05.761871] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:06.762381] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:06.770254] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
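For readers tracing the failure: the wait loop at test_llm_inference_service.py:632 appears to poll the LLMInferenceService object once per second and diff the set of condition types whose status is 'True' against the expected set {'Ready', 'WorkloadsReady', 'RouterReady'}. Below is a minimal sketch of that pattern, assuming the official kubernetes Python client and KServe's serving.kserve.io/v1alpha1 llminferenceservices custom resource; the function and variable names are illustrative, not the test's actual get_llmisvc helper.

    import time
    from kubernetes import client, config

    # Illustrative sketch only -- the real test wraps its own get_llmisvc
    # helper in the logging decorators seen in this log.
    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

    def wait_for_llmisvc(name: str, namespace: str, timeout_s: int = 600) -> dict:
        config.load_kube_config()  # use load_incluster_config() when in-cluster
        api = client.CustomObjectsApi()
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            obj = api.get_namespaced_custom_object(
                group="serving.kserve.io", version="v1alpha1",
                namespace=namespace, plural="llminferenceservices", name=name,
            )
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return obj  # every expected condition reports True
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {EXPECTED}, got {conditions}")
            time.sleep(1)
        raise TimeoutError(f"{name} never reached conditions {EXPECTED}")

Under that sketch, the log above corresponds to wait_for_llmisvc('scheduler-configmap-ref-test', 'kserve-ci-e2e-test') spinning while Ready and WorkloadsReady stay False.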
[... the get_llmisvc poll and the identical "Waiting: Missing true conditions" entry above repeat once per second, with only the timestamps advancing, from 2026-04-24T19:52:43 through 2026-04-24T19:53:24 ...]
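The repeated 'MinimumReplicasUnavailable' reason is surfaced from the workload Deployment's Available condition, so the usual next step is to inspect the Deployments and pods in the test namespace rather than the LLMInferenceService itself. A hedged sketch with the same kubernetes client (no KServe-specific labels assumed; given the test name scheduler-configmap-ref-test, a missing or mis-referenced scheduler ConfigMap is one plausible culprit, but the pod-level states are what actually settle it):

    from kubernetes import client, config

    # Sketch: surface why a Deployment in the e2e namespace reports
    # MinimumReplicasUnavailable, then drop to pod-level detail.
    config.load_kube_config()
    apps = client.AppsV1Api()
    core = client.CoreV1Api()
    ns = "kserve-ci-e2e-test"

    for dep in apps.list_namespaced_deployment(ns).items:
        for cond in dep.status.conditions or []:
            if cond.type == "Available" and cond.status != "True":
                print(dep.metadata.name, cond.reason, cond.message)

    # Container states usually name the real blocker (image pull errors,
    # scheduling, failing probes, a missing ConfigMap volume, ...).
    for pod in core.list_namespaced_pod(ns).items:
        print(pod.metadata.name, pod.status.phase,
              [cs.state for cs in (pod.status.container_statuses or [])])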
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:25.920374] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:25.928606] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'},
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:26.928905] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:26.936105] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:27.936543] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:27.944416] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:28.944816] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:28.952076] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:29.952581] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:29.960465] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:30.960835] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:30.969012] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:31.969519] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:31.977395] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:32.977810] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:32.985442] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:33.985913] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:33.993426] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:34.993914] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:35.001494] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:36.001810] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:36.009502] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:37.009799] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:37.017243] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:38.017553] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:38.025390] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:39.025691] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:39.034023] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:40.034517] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:40.042353] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:41.042626] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:41.050629] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:42.050858] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:42.058261] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:43.058662] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:43.066702] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:44.066957] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:44.074856] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:45.075119] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:53:45.082072] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:46.082363] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:46.091266] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:47.091735] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:47.099357] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:48.099632] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:48.107825] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:49.108108] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:49.115989] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:50.116258] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:50.123967] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:51.124233] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:51.132080] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:52.132540] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:52.142134] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:53.142453] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:53.150139] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:54.150485] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:54.158174] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:55.158641] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:55.166549] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:56.166801] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:56.173982] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:57.174370] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:57.182448] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:58.182761] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:58.190226] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:59.190687] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:59.197968] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:00.198280] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:00.206408] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:01.206872] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:01.214903] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:02.215160] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:02.222937] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:03.223454] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:03.231497] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:04.231812] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:04.239838] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:05.240182] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:05.248011] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:06.248324] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:06.255932] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:07.256354] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:07.266282] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:08.266606] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:08.274118] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:09.274660] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:09.281626] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:10.281898] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:10.289896] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:11.290294] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:11.297848] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:12.298176] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:54:12.305257] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:13.305713] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:13.313696] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:14.314002] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:14.321490] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:15.321934] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:15.330424] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:16.330813] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:16.338650] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:17.339041] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:17.346954] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:18.347193] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:18.354292] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:19.354576] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:19.362147] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:20.362481] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:20.370200] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:21.370476] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:21.377800] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:22.378092] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:22.386587] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:23.386869] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:23.394476] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:24.394747] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:24.401867] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:25.402169] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:25.410529] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:26.410771] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:26.417926] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:27.418195] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:27.426243] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:28.426722] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:28.434754] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:29.434990] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:29.442420] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:30.442848] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:30.450875] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
[... the [get_llmisvc] start/end pair and an identical "Waiting: Missing true conditions" record repeat once per second from 2026-04-24T19:54:30 through 2026-04-24T19:55:09; none of the conditions change ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:10.767239] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:10.774761] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:11.775046] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:11.782884] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:12.783194] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:12.791219] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:13.791512] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:13.798868] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:14.799366] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:14.806938] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:15.807187] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:15.814701] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:16.814963] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:16.822521] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:17.822798] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:17.830751] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:18.831024] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:18.838459] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:19.838917] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:19.846980] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:20.847439] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:20.854870] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:21.855345] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:21.863416] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:22.863702] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:22.870849] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:23.871164] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:23.878497] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:24.879002] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:24.886268] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:25.886587] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:25.895124] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:26.895760] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:26.903567] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:27.903943] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:27.912595] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:28.912975] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:28.920925] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:29.921181] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:29.929015] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:30.929282] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:30.936427] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:31.936879] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:31.944631] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:32.945117] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:32.954726] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:33.955175] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:55:33.963392] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:34.963685] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:34.971899] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:35.972348] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:35.981073] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:36.981539] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:36.989086] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:37.989376] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:37.997610] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:38.997905] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:39.005719] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:40.005992] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:40.014204] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:41.014678] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:41.022348] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:42.022813] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:42.030986] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
[e2e-llm-inference-service] [... the same get_llmisvc start/end pair and an identical 'Waiting: Missing true conditions' dump repeat once per second from 2026-04-24T19:55:42 through 2026-04-24T19:56:22; the condition set never changes ...]
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:22.379710] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:22.388087] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:23.388594] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:23.396837] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:24.397349] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:24.405270] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:25.405818] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:25.414111] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:26.414565] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:26.422453] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:27.422779] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:27.430662] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:28.430969] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:56:28.438644] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:29.438948] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:29.446721] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:30.447053] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:30.455194] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:31.455492] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:31.462897] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:32.463229] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:32.471080] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:33.471507] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:33.479062] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:34.479348] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:34.487034] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:35.487517] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:35.494616] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:36.494905] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:36.502902] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:37.503236] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:37.518238] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:38.518756] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:38.526248] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:39.526729] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:39.534899] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:40.535186] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:40.542980] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:41.543263] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:41.551450] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:42.551880] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:42.559804] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:43.560185] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:43.568161] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:44.568416] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:44.577025] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:45.577343] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:45.587965] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:46.588509] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:46.597241] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:47.597586] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:47.604821] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:48.605117] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:48.612756] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:49.613010] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:49.620201] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:50.620612] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:50.628741] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
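The records above come from a standard poll-until-ready loop: once per second the test fetches the LLMInferenceService, collects the condition types whose status is 'True', and compares them against the set it needs. A minimal sketch of that pattern follows; the names wait_for_conditions and the exact get_llmisvc signature are assumptions for illustration, not the actual helper in test_llm_inference_service.py (which lives around line 632):

    import time

    def wait_for_conditions(get_llmisvc, name, namespace, version="v1alpha1",
                            expected=frozenset({"Ready", "WorkloadsReady", "RouterReady"}),
                            timeout_s=600, interval_s=1.0):
        """Poll an LLMInferenceService until every expected condition type
        reports status 'True', or give up after timeout_s seconds."""
        missing = set(expected)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            llmisvc = get_llmisvc(name, namespace, version)
            conditions = llmisvc.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return llmisvc  # all expected conditions are True
            # Mirrors the repeated log record above: same snapshot, once per second.
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {set(expected)}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f"conditions never became True: {missing}")

Read against that loop, the snapshot pinpoints the failure: RouterReady flipped to 'True' at 19:49:56, but MainWorkloadReady, WorkloadsReady, and the top-level Ready have been 'False' since 19:49:32 with reason MinimumReplicasUnavailable. The Deployment backing the main workload never brought up its minimum number of available replicas, so each subsequent poll logs the same state until the test's wait budget is exhausted.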
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:59.696960] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:59.704640] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:00.705032] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:00.712585] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:01.712879] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:01.721069] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:02.721545] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:02.729232] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:03.729660] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:03.737802] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:04.738107] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:04.746429] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:05.746779] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:05.755250] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:06.755789] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:06.764626] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:07.764958] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:07.772838] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:08.773119] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:08.780949] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:09.781277] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:09.788912] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:10.789199] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:10.797247] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:11.797593] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:11.805294] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:12.805745] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:12.815404] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:13.815713] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:13.823185] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:14.823522] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:14.831481] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:15.831746] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:15.839045] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:16.839366] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:16.849477] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:17.849771] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:17.856979] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:18.857342] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:18.864945] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:19.865230] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:19.873488] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:20.873883] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:20.881846] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:21.882189] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:21.889897] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:22.890153] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:57:22.897232] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:23.897551] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:23.904989] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:24.905236] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:24.912578] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:25.912897] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:25.921066] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:26.921345] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:26.929441] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] (the get_llmisvc start/end pair from logging.py:34/43 and the identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}" condition dump above repeat once per second from 19:57:27 through 19:58:09 for 'scheduler-configmap-ref-test' in namespace 'kserve-ci-e2e-test'; throughout the window HTTPRoutesReady, InferencePoolReady, PresetsCombined, RouterReady, and SchedulerWorkloadReady stay True, while MainWorkloadReady, Ready, and WorkloadsReady stay False with reason MinimumReplicasUnavailable: 'Deployment does not have minimum availability.')
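The repeated "Waiting: Missing true conditions" lines come from a once-per-second poll loop in test_llm_inference_service.py:632: fetch the LLMInferenceService, collect the condition types whose status is True, and keep waiting while any expected type is missing. A minimal sketch of such a condition-wait loop, assuming a get_llmisvc helper that returns the resource as a dict (names and signature are illustrative, not the actual test code):

```python
import time

def wait_for_true_conditions(get_llmisvc, name, namespace, expected, timeout=600, interval=1.0):
    """Poll an LLMInferenceService until every condition type in `expected` has status True."""
    expected = set(expected)
    missing = expected
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resource = get_llmisvc(name, namespace)  # assumed helper returning the resource as a dict
        conditions = resource.get("status", {}).get("conditions", [])
        # Condition types currently reporting status True
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = expected - true_types
        if not missing:
            return conditions  # all expected conditions are True
        print(f"Waiting: Missing true conditions: {missing}, expected {expected}, got {conditions}")
        time.sleep(interval)
    raise TimeoutError(f"Conditions {missing} never reached status True within {timeout}s")
```

With a loop of this shape the wait can never succeed here: MainWorkloadReady, Ready, and WorkloadsReady stay False for the entire window because the workload Deployment never reaches minimum availability, so the poll eventually times out.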
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:07.269351] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:07.277597] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:08.277895] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:08.285467] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:09.285706] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:09.293094] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:10.293383] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:10.300921] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:11.301257] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:11.310921] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:12.311144] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:12.319283] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:13.319574] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:13.327454] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:14.327726] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:14.336200] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:15.336639] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:15.344633] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:16.344902] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:16.352900] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:17.353349] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:58:17.360954] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:18.361416] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:18.369867] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:19.370167] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:19.378466] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:20.378919] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:20.386918] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:21.387198] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:21.395002] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:22.395344] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:22.402951] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:23.403237] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:23.410757] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:24.411033] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:24.418582] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:25.418917] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:25.427626] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:26.427985] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:26.436073] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:27.436448] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:27.444421] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:28.444693] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:28.451987] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:29.452356] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:29.460043] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:30.460545] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:30.468222] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:31.468841] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:31.476635] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:32.477095] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:32.484454] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:33.484830] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:33.492849] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:34.493151] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:34.501446] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:35.501770] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:35.509697] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:36.510144] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:36.517885] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:37.518183] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:37.526735] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:38.527014] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:38.535089] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:39.535369] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:39.543351] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:40.543773] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:40.550881] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:41.551191] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:41.558939] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:42.559377] start - args=(, 
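For readability, the check that emits the repeated message above amounts to a set difference between the expected condition types and the types currently reporting status 'True'. Below is a minimal sketch of such a polling loop; the helper names, the polling interval, and the get_llmisvc signature are assumptions for illustration, not the actual helpers in test_llm_inference_service.py:

    # Minimal sketch of the readiness wait producing the repeated log message.
    # All names here are hypothetical; only the condition semantics come from the log.
    import time

    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

    def missing_true_conditions(conditions, expected=EXPECTED):
        # A condition counts as satisfied only when its status is the string "True".
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        return expected - true_types

    def wait_until_ready(get_llmisvc, name, namespace, timeout_s=600, interval_s=1.0):
        # Poll once per second, matching the cadence visible in the log above.
        missing = set(EXPECTED)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            status = get_llmisvc(name, namespace).get("status", {})
            missing = missing_true_conditions(status.get("conditions", []))
            if not missing:
                return
            print(f"Waiting: Missing true conditions: {missing}, expected {EXPECTED}")
            time.sleep(interval_s)
        raise TimeoutError(f"conditions still not True after {timeout_s}s: {missing}")

With the status shown above, missing_true_conditions returns {'Ready', 'WorkloadsReady'}: RouterReady is True, while Ready and WorkloadsReady stay False because MainWorkloadReady is False.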
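Every False condition above carries reason MinimumReplicasUnavailable, i.e. the main workload Deployment never reached minimum availability, so the natural triage step is to inspect the Deployments and pods in the test namespace. A hedged sketch using the standard kubernetes Python client, assuming kubeconfig access to the test cluster; the namespace comes from the log, everything else is illustrative:

    # Triage sketch: surface why a Deployment reports MinimumReplicasUnavailable.
    from kubernetes import client, config

    config.load_kube_config()  # point this at the test cluster's kubeconfig
    ns = "kserve-ci-e2e-test"  # namespace taken from the log above

    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(ns).items:
        st = dep.status
        print(f"{dep.metadata.name}: ready {st.ready_replicas or 0}/{st.replicas or 0}")

    core = client.CoreV1Api()
    for pod in core.list_namespaced_pod(ns).items:
        statuses = pod.status.container_statuses or []
        if pod.status.phase != "Running" or not all(c.ready for c in statuses):
            # Waiting reasons such as ImagePullBackOff or CrashLoopBackOff
            # usually explain an unavailable Deployment.
            waiting = [c.state.waiting.reason for c in statuses
                       if c.state and c.state.waiting]
            print(pod.metadata.name, pod.status.phase, waiting)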
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:42.566613] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:43.566981] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:43.575385] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:44.575651] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:58:44.584489] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:45.584765] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:45.598800] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:46.599094] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:46.607261] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:47.607772] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:47.616879] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:48.617175] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:48.625170] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:49.625704] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:49.634221] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:50.634564] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:50.641827] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:51.642152] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:51.649470] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:52.649803] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:52.657267] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:53.657729] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:53.665794] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:54.666240] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:54.675032] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:55.675374] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:55.683009] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:56.683354] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:56.691126] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:57.691546] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:57.698711] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:58.698988] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:58.706627] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:59.706899] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:59.714717] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:00.714996] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:00.722881] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:01.723199] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:01.737157] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:02.737719] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:02.745596] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:03.745881] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:03.752980] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:04.753291] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:04.760589] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:05.761054] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:05.768979] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:06.769261] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:06.776773] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:07.777088] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:07.784897] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:08.785362] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:08.794750] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:09.795007] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:09.802354] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:10.802625] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:10.810030] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:11.810347] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T19:59:11.818478] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:12.818836] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:12.826461] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:13.826765] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:13.834050] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got:
    HTTPRoutesReady         True   2026-04-24T19:49:24Z
    InferencePoolReady      True   2026-04-24T19:49:24Z
    MainWorkloadReady       False  2026-04-24T19:49:32Z  MinimumReplicasUnavailable: Deployment does not have minimum availability.
    PresetsCombined         True   2026-04-24T19:49:24Z
    Ready                   False  2026-04-24T19:49:32Z  MinimumReplicasUnavailable: Deployment does not have minimum availability.
    RouterReady             True   2026-04-24T19:49:56Z
    SchedulerWorkloadReady  True   2026-04-24T19:49:56Z
    WorkloadsReady          False  2026-04-24T19:49:32Z  MinimumReplicasUnavailable: Deployment does not have minimum availability.
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:14.834361] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:14.842479] end - ✅ in 0.008s
[... the get_llmisvc start/end pair (0.007-0.012s each) and an identical "Waiting: Missing true conditions" entry repeat once per second from 19:59:15 through 19:59:56; the condition set above never changes ...]
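For reference, the wait at test_llm_inference_service.py:632 behaves like a poll-until-all-conditions-True loop over the LLMInferenceService status. Below is a minimal sketch of that pattern, not the test's actual code; the CRD group "serving.kserve.io" and plural "llminferenceservices" are assumptions based on KServe's naming, while the name, namespace, version, and expected conditions come from this run:

    import time

    from kubernetes import client, config


    def wait_for_true_conditions(name, namespace, expected, timeout_s=600):
        """Poll an LLMInferenceService until every expected condition is True."""
        config.load_kube_config()
        api = client.CustomObjectsApi()
        deadline = time.time() + timeout_s
        missing = set(expected)
        while time.time() < deadline:
            # Equivalent of the get_llmisvc call logged once per second above.
            obj = api.get_namespaced_custom_object(
                group="serving.kserve.io",     # assumption: KServe CRD group
                version="v1alpha1",            # from the logged get_llmisvc args
                namespace=namespace,
                plural="llminferenceservices", # assumption: CRD plural
                name=name,
            )
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c["status"] == "True"}
            missing = set(expected) - true_types
            if not missing:
                return conditions
            print(f"Waiting: Missing true conditions: {missing}, expected {set(expected)}")
            time.sleep(1)
        raise TimeoutError(f"{name}: conditions still not True after {timeout_s}s: {missing}")

    # Values from this run:
    # wait_for_true_conditions("scheduler-configmap-ref-test", "kserve-ci-e2e-test",
    #                          {"Ready", "WorkloadsReady", "RouterReady"})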
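Every False condition above carries the same reason and message (MinimumReplicasUnavailable: "Deployment does not have minimum availability."), i.e. the router and scheduler workloads came up but the main workload Deployment never reached its minimum ready replicas, which is what keeps Ready and WorkloadsReady False. A quick way to see which Deployment is short, using standard kubernetes-client calls against the namespace from the log (a diagnostic sketch, not part of the test):

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    # List every Deployment in the test namespace with ready vs desired
    # replicas; the one backing the LLMInferenceService main workload should
    # show ready < desired while the wait loop above spins.
    for d in apps.list_namespaced_deployment("kserve-ci-e2e-test").items:
        ready = d.status.ready_replicas or 0
        desired = d.spec.replicas or 0
        print(f"{d.metadata.name}: ready={ready}/{desired}")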
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:57.188847] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:57.196473] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [..., {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:58.196793] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:58.204621] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:59.204941] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:59.212664] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:00.212983] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:00.221363] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:01.221853] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:01.229720] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:02.230039] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:02.237929] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:03.238204] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:03.249547] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:04.250008] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:04.258406] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:05.258695] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:05.270402] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:06.270651] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T20:00:06.277922] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:07.278352] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:07.286089] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:08.286457] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:08.294441] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:09.294732] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:09.302166] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:10.302545] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:10.310657] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:11.310914] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:11.318294] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:12.318723] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:12.334597] end - ✅ in 0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:13.334930] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:13.342655] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:14.342971] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:14.351514] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:15.351902] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:15.359631] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:16.359896] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:16.367905] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:17.368232] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:17.375979] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:18.376277] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:18.383804] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:19.384119] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:19.392028] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:20.392384] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:20.404407] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:21.404821] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:21.412861] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:22.413352] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:22.422014] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
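For anyone triaging this failure, the repeated entries above come from a condition-wait loop in the test. The following is a minimal sketch of that loop, reconstructed from the log output alone; it is not the actual test code, and the helper signature, the timeout, and the one-second interval are assumptions inferred from the timestamps:

import time

EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}  # the "expected" set in the log
POLL_INTERVAL_S = 1.0  # the log shows one get_llmisvc call per second


def wait_for_llmisvc_ready(get_llmisvc, name, namespace, version, timeout_s=600.0):
    """Poll until every expected condition on the LLMInferenceService is True.

    get_llmisvc is a stand-in for the test helper named in the log; it is
    assumed to return the resource as a dict carrying status.conditions.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resource = get_llmisvc(name, namespace, version)
        conditions = resource.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return resource  # every expected condition is True
        # Mirrors the repeated log line:
        # "Waiting: Missing true conditions: {...}, expected {...}, got [...]"
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {EXPECTED}, got {conditions}")
        time.sleep(POLL_INTERVAL_S)
    raise TimeoutError(
        f"conditions {sorted(EXPECTED)} did not all become True within {timeout_s}s")

In this run the loop never drains: the Deployment behind scheduler-configmap-ref-test has reported MinimumReplicasUnavailable since 19:49:32, more than ten minutes before the last poll shown, while the router and scheduler sides are healthy (RouterReady and SchedulerWorkloadReady are True). The main workload Deployment's pods are the natural place to look next.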
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:33.503378] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43
[get_llmisvc] [2026-04-24T20:00:33.511021] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:34.511376] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:34.519887] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:35.520281] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:35.528012] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:36.528361] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:36.536140] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:37.536541] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:37.544448] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:38.544744] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:38.552152] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:39.552547] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:39.559925] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:40.560238] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:40.568244] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:41.568688] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:41.576561] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:42.576855] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:42.584739] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:43.585015] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:43.592907] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:44.593206] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:44.601386] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:45.601699] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:45.615029] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
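For orientation, the loop producing these lines is a plain poll-and-compare on the LLMInferenceService status conditions: fetch the resource, collect the condition types whose status is 'True', and diff against the expected set. Below is a minimal Python sketch of that pattern; it is not the KServe test code, and fetch_llmisvc is a hypothetical callable standing in for the get_llmisvc helper logged above.

    import time

    def wait_for_true_conditions(fetch_llmisvc,
                                 expected=frozenset({"Ready", "WorkloadsReady", "RouterReady"}),
                                 timeout_s=1800.0, interval_s=1.0):
        """Poll a resource until every expected condition type reports status 'True'."""
        deadline = time.monotonic() + timeout_s
        missing = set(expected)
        while time.monotonic() < deadline:
            resource = fetch_llmisvc()  # stand-in for the get_llmisvc helper in the log
            conditions = resource.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return conditions  # success: every expected condition is True
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {set(expected)}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f"conditions never became True; still missing {missing}")

Against the state logged above, true_types would be {'HTTPRoutesReady', 'InferencePoolReady', 'PresetsCombined', 'RouterReady', 'SchedulerWorkloadReady'}, so missing stays {'Ready', 'WorkloadsReady'} on every iteration, which matches the repeated message.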
[throughout the wait, the same three conditions are False: MainWorkloadReady, Ready, and WorkloadsReady, all with reason MinimumReplicasUnavailable ('Deployment does not have minimum availability.') and lastTransitionTime 2026-04-24T19:49:32Z, while HTTPRoutesReady, InferencePoolReady, PresetsCombined, RouterReady, and SchedulerWorkloadReady report True]
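MinimumReplicasUnavailable is surfaced by the underlying Deployment's Available condition, so the natural next step when triaging a run like this is to look below the LLMInferenceService at the Deployments and Pods in the test namespace. A hedged diagnostic sketch with the official kubernetes Python client follows; it assumes kubeconfig access to the ephemeral test cluster, takes the namespace kserve-ci-e2e-test from the poll arguments above, and deliberately guesses no deployment name.

    from kubernetes import client, config

    config.load_kube_config()  # assumes a kubeconfig for the test cluster is available
    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    ns = "kserve-ci-e2e-test"  # namespace taken from the get_llmisvc args in the log
    for dep in apps.list_namespaced_deployment(ns).items:
        available = dep.status.available_replicas or 0
        print(f"{dep.metadata.name}: {available}/{dep.spec.replicas} available")
        for cond in dep.status.conditions or []:
            # The Deployment's 'Available' condition carries MinimumReplicasUnavailable
            print(f"  {cond.type}={cond.status} reason={cond.reason}: {cond.message}")

    for pod in core.list_namespaced_pod(ns).items:
        # Pod phase and container state usually name the real cause
        # (image pulls, unschedulable resources, crash loops)
        print(pod.metadata.name, pod.status.phase)

The excerpt's final poll, still unchanged after more than ten minutes of waiting, follows.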
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:08.804383] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:08.812505] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'},
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:09.812873] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:09.820545] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:10.820793] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:10.828831] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:11.829158] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:11.837012] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:12.837374] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:12.844788] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:13.845075] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:13.853018] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:14.853359] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:14.861055] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:15.861473] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:15.869484] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:16.869946] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:16.877578] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:17.877992] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:17.885464] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:18.885867] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:18.893212] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:19.893516] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:19.901357] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:20.901643] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:20.908971] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:21.909422] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:21.918622] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:22.918913] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:22.926391] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:23.926699] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:23.934656] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:24.934949] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:24.942653] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:25.942923] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:25.950736] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:26.950997] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:26.958521] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:27.958795] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
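For context on what line 632 is doing: the wait is a plain condition poll over the LLMInferenceService CR. A minimal Python sketch of the pattern follows; the get_llmisvc name is taken from the log above, but its signature, return shape, and the test's real implementation are assumptions here, not the actual test code:

    import time

    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

    def wait_for_conditions(get_llmisvc, name, namespace, version="v1alpha1",
                            timeout_s=1800, interval_s=1.0):
        # Poll until every type in EXPECTED reports status == "True" on the CR.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            cr = get_llmisvc(name, namespace, version)  # one start/end pair in the log
            conditions = cr.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return cr
            # This is the line repeated above: missing subset, expected set, raw list.
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {EXPECTED}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f"{name}: expected conditions {EXPECTED} never all True")

With a state like the one above, such a loop can only spin until its timeout, because the blocking conditions depend on the Deployment gaining available replicas, not on anything the test does.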
Identical poll cycles continue, still roughly once per second, from 20:01:28 through 20:01:43; nothing in the condition list changes.
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:43.079999] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:43.088433] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:44.088918] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:44.097556] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:45.098048] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:45.105897] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:46.106241] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:46.114266] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:47.114578] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:47.122173] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:48.122511] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:48.130259] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:49.130579] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:49.137990] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:50.138352] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:50.147615] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:51.147945] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:51.155997] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:52.156533] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:52.165448] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:53.165735] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:53.173369] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:54.173834] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:54.182188] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:55.182724] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T20:01:55.190651] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:56.190898] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:56.198472] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:57.198847] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:57.206555] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:58.206982] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:58.215068] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:59.215366] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:59.225349] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:00.225826] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:00.234271] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:01.234620] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:01.242517] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:02.242806] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:02.250975] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:03.251369] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:03.259598] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:04.259974] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:04.267791] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:05.268179] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:05.275939] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:06.276386] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:06.283842] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:07.284171] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:07.293016] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:08.293576] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:08.301450] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:09.301883] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:09.309519] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:10.309927] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:10.317932] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:11.318219] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:11.325722] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:12.326021] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:12.334119] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:13.334404] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:13.341789] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
[... the same get_llmisvc poll and Waiting entry repeat once per second from 2026-04-24T20:02:13 through 20:02:52; the conditions above are unchanged throughout, only the poll timestamps and call durations (0.007s-0.020s) vary ...]
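The loop behind these entries is a plain condition poll: fetch the LLMInferenceService, collect the condition types whose status is True, and keep waiting while any of the expected types ({'Ready', 'WorkloadsReady', 'RouterReady'}) is missing. The sketch below is a minimal reconstruction of that loop, not the test's actual get_llmisvc helper; the CRD coordinates (group serving.kserve.io, plural llminferenceservices) and the use of the official kubernetes Python client are assumptions, since the log only shows the name, namespace, and API-version arguments.

    # Minimal sketch of the poll loop above; CRD group/plural are assumptions.
    import time
    from kubernetes import client, config

    def wait_for_llmisvc_conditions(name, namespace, expected,
                                    timeout_s=1800, interval_s=1):
        config.load_kube_config()  # or load_incluster_config() when run in-cluster
        api = client.CustomObjectsApi()
        deadline = time.monotonic() + timeout_s
        missing = set(expected)
        while time.monotonic() < deadline:
            obj = api.get_namespaced_custom_object(
                "serving.kserve.io", "v1alpha1",   # version matches the log's 'v1alpha1' arg
                namespace, "llminferenceservices", name)
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return conditions          # every expected condition type is True
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {set(expected)}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f"still missing true conditions after {timeout_s}s: {missing}")

    # The poll visible in this log corresponds to:
    # wait_for_llmisvc_conditions('scheduler-configmap-ref-test', 'kserve-ci-e2e-test',
    #                             {'Ready', 'WorkloadsReady', 'RouterReady'})

In this run the loop can never converge: MainWorkloadReady stays False with reason MinimumReplicasUnavailable ("Deployment does not have minimum availability."), which keeps WorkloadsReady and Ready False as well, even though the router and scheduler conditions are already True.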
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:52.673001] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:52.680625] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:53.680965] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:53.689961] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:54.690292] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:54.698077] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:55.698344] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:55.705968] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:56.706254] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:56.713276] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:57.713618] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:57.721869] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:58.722190] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:58.729875] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:59.730191] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:59.738268] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:00.738608] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:00.747875] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:01.748186] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:01.755332] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:02.755656] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:02.763363] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:03.763649] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:03.771509] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:04.771818] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:04.779554] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:05.779844] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:05.786964] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:06.787392] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:06.795164] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:07.795551] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:07.802770] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:08.803057] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:08.810574] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:09.810873] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:09.818587] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
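For context, the repeated "Waiting" lines come from a simple condition-diff poll loop: fetch the LLMInferenceService, collect the condition types whose status is 'True', and retry until the expected set {'Ready', 'WorkloadsReady', 'RouterReady'} is covered. The sketch below illustrates that pattern; the get_llmisvc helper signature, the 1 s interval, and the timeout value are assumptions for illustration, not the actual code at test_llm_inference_service.py:632.

```python
# Minimal sketch (assumed, not the actual kserve e2e helper) of a poll loop
# that would produce the "Waiting: Missing true conditions" lines above.
import time

EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

def wait_for_llmisvc_ready(get_llmisvc, name, namespace, version="v1alpha1",
                           timeout_s=900, interval_s=1.0):
    """Poll the LLMInferenceService until every expected condition is True."""
    missing = set(EXPECTED)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        llmisvc = get_llmisvc(name, namespace, version)  # hypothetical fetch helper
        conditions = llmisvc.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return llmisvc  # all expected conditions are True
        # Mirrors the repeated log line: show the diff and the raw conditions.
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {EXPECTED}, got {conditions}")
        time.sleep(interval_s)
    raise TimeoutError(f"conditions {sorted(missing)} never became True "
                       f"within {timeout_s}s")
```

Under that logic the log is self-consistent: RouterReady turned 'True' at 19:49:56, so only Ready and WorkloadsReady remain in the missing set, and they cannot clear while the main workload Deployment keeps reporting MinimumReplicasUnavailable.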
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:31.302505] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:31.311951] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z',
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:32.312412] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:32.320596] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:33.320871] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:33.329354] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:34.329907] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:34.337970] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:35.338402] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:35.346719] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 
'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:36.347036] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:36.355090] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:37.355429] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:37.363085] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment 
does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:38.363409] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:38.371918] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:39.372168] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:39.380155] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:40.380476] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:40.388080] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:41.388364] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:41.396509] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:42.396955] start - args=(, 
'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:42.404580] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:43.405041] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:43.413051] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:44.413469] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 
[get_llmisvc] [2026-04-24T20:03:44.425904] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:45.426351] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:45.435222] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:46.435630] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:46.443895] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:47.444377] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:47.452507] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:48.452993] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:48.460563] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected 
{'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:49.460866] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:49.469226] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:50.469771] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:50.477749] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 
'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:51.478026] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:51.485536] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:52.486006] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:52.494539] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, 
{'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:53.494995] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:53.503021] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:54.503335] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:54.510695] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:55.510983] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:55.519623] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:56.519914] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:56.527389] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:57.527703] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:57.535536] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:58.535966] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:58.542950] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the get_llmisvc poll (logging.py:34/43) and the "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}" entry (test_llm_inference_service.py:632) repeat once per second, with an identical condition dump, from 2026-04-24T20:03:59 through 2026-04-24T20:04:04 ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:05.594399] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:05.602183] end - ✅ in 0.008s
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T20:04:05.602457] end - ❌ 900.510s: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:24Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:49:56Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:49:32Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:05.602672] start - args=(, 'scheduler-configmap-ref-test', 'kserve-ci-e2e-test'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:05.609466] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T20:04:06.839618] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service]  'kind': 'LLMInferenceService',
[e2e-llm-inference-service]  'metadata': {'annotations': {'security.opendatahub.io/enable-auth': 'false'},
[e2e-llm-inference-service]               'creation_timestamp': None,
[e2e-llm-inference-service]               'deletion_grace_period_seconds': None,
[e2e-llm-inference-service]               'deletion_timestamp': None,
[e2e-llm-inference-service]               'finalizers': None,
[e2e-llm-inference-service]               'generate_name': None,
[e2e-llm-inference-service]               'generation': None,
[e2e-llm-inference-service]               'labels': None,
[e2e-llm-inference-service]               'managed_fields': None,
[e2e-llm-inference-service]               'name': 'scheduler-configmap-ref-test',
[e2e-llm-inference-service]               'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service]               'owner_references': None,
[e2e-llm-inference-service]               'resource_version': None,
[e2e-llm-inference-service]               'self_link': None,
[e2e-llm-inference-service]               'uid': None},
[e2e-llm-inference-service]  'spec': {'baseRefs': [{'name': 'router-managed-scheduler-config-305f7a8b'},
[e2e-llm-inference-service]                        {'name': 'scheduler-with-configmap-ref-sc-67492bcd'},
[e2e-llm-inference-service]                        {'name': 'workload-llmd-simulator-schedul-b2f159da'}]},
[e2e-llm-inference-service]  'status': None}), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [delete_llmisvc] [2026-04-24T20:04:06.935737] end - ✅ in 0.096s
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_inference_service] [2026-04-24T20:04:06.935851] end - ❌ 901.898s: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [... same condition dump as the wait_for_llm_isvc_ready error above ...]
[e2e-llm-inference-service] ---------------------------- Captured log teardown -----------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1399 Deleted ConfigMap scheduler-config-e2e from namespace kserve-ci-e2e-test
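[triage note] The first failure above is a plain 900s readiness timeout: the router and scheduler workloads came up, but the main workload Deployment never reached minimum availability (MainWorkloadReady, Ready and WorkloadsReady stayed False with reason MinimumReplicasUnavailable). The log never shows *why* the pods are unavailable; that information lives in the Deployment and Pod status, not in the LLMInferenceService conditions. A minimal triage sketch with the kubernetes Python client follows -- the namespace is taken from the log, but scanning every Deployment in it is an assumption, since the Deployment name is not in this excerpt:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
    ns = "kserve-ci-e2e-test"  # namespace from the log above

    # Show each Deployment's availability and any condition that is not True.
    apps = client.AppsV1Api()
    for dep in apps.list_namespaced_deployment(ns).items:
        print(f"{dep.metadata.name}: {dep.status.available_replicas or 0}/{dep.spec.replicas} available")
        for cond in dep.status.conditions or []:
            if cond.status != "True":
                print(f"  {cond.type}={cond.status} reason={cond.reason}: {cond.message}")

    # Waiting containers (ImagePullBackOff, CrashLoopBackOff, ...) are the usual
    # cause of MinimumReplicasUnavailable; surface them per pod.
    core = client.CoreV1Api()
    for pod in core.list_namespaced_pod(ns).items:
        for cs in pod.status.container_statuses or []:
            if cs.state and cs.state.waiting:
                print(f"{pod.metadata.name}/{cs.name}: {cs.state.waiting.reason}")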
[e2e-llm-inference-service] _ test_llm_autoscaling_hpa_lws[router-managed-workload-llmd-simulator-lws-scaling-hpa] _
[e2e-llm-inference-service] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-llm-inference-service]
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-lws', 'scaling-hpa'], prompt='KServe is a', service_nam...
[e2e-llm-inference-service]             {'name': 'scaling-hpa-autoscale-hpa-lws-b344a3ff'}]},
[e2e-llm-inference-service]             'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @pytest.mark.llminferenceservice
[e2e-llm-inference-service]     @pytest.mark.autoscaling
[e2e-llm-inference-service]     @pytest.mark.autoscaling_hpa
[e2e-llm-inference-service]     @pytest.mark.parametrize(
[e2e-llm-inference-service]         "test_case",
[e2e-llm-inference-service]         [
[e2e-llm-inference-service]             pytest.param(
[e2e-llm-inference-service]                 TestCase(
[e2e-llm-inference-service]                     base_refs=[
[e2e-llm-inference-service]                         "router-managed",
[e2e-llm-inference-service]                         "workload-llmd-simulator-lws",
[e2e-llm-inference-service]                         "scaling-hpa",
[e2e-llm-inference-service]                     ],
[e2e-llm-inference-service]                     prompt="KServe is a",
[e2e-llm-inference-service]                     service_name="autoscale-hpa-lws",
[e2e-llm-inference-service]                 ),
[e2e-llm-inference-service]                 marks=[
[e2e-llm-inference-service]                     pytest.mark.cluster_cpu,
[e2e-llm-inference-service]                     pytest.mark.cluster_multi_node,
[e2e-llm-inference-service]                     pytest.mark.llmd_simulator,
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]         ],
[e2e-llm-inference-service]         indirect=["test_case"],
[e2e-llm-inference-service]         ids=generate_test_id,
[e2e-llm-inference-service]     )
[e2e-llm-inference-service]     @log_execution
[e2e-llm-inference-service]     def test_llm_autoscaling_hpa_lws(test_case: TestCase):
[e2e-llm-inference-service]         """HPA + LWS: VA and HPA exist; pods scale under load."""
[e2e-llm-inference-service]         inject_k8s_proxy()
[e2e-llm-inference-service]         kserve_client = _new_kserve_client()
[e2e-llm-inference-service]         service_name = test_case.llm_service.metadata.name
[e2e-llm-inference-service]
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           _create_and_wait(kserve_client, test_case)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:473:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client = 
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-lws', 'scaling-hpa'], prompt='KServe is a', service_nam...
[e2e-llm-inference-service]             {'name': 'scaling-hpa-autoscale-hpa-lws-b344a3ff'}]},
[e2e-llm-inference-service]             'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def _create_and_wait(kserve_client, test_case):
[e2e-llm-inference-service]         """Create LLMISVC and wait for it to be ready."""
[e2e-llm-inference-service]         create_llmisvc(kserve_client, test_case.llm_service)
[e2e-llm-inference-service] >       wait_for_llm_isvc_ready(
[e2e-llm-inference-service]             kserve_client, test_case.llm_service, test_case.wait_timeout
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:295:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] args = (, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service]  'kin...lws-aut-fe7a55cc'},
[e2e-llm-inference-service]  {'name': 'scaling-hpa-autoscale-hpa-lws-b344a3ff'}]},
[e2e-llm-inference-service]  'status': None}, 900)
[e2e-llm-inference-service] kwargs = {}, func_name = 'wait_for_llm_isvc_ready'
[e2e-llm-inference-service] timestamp_start = '2026-04-24T19:50:43.699323', start_time = 1777060243.6996126
[e2e-llm-inference-service] duration = 900.928884267807, timestamp_end = '2026-04-24T20:05:44.628498'
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @functools.wraps(func)
[e2e-llm-inference-service]     def wrapper(*args, **kwargs):
[e2e-llm-inference-service]         func_name = func.__name__
[e2e-llm-inference-service]
[e2e-llm-inference-service]         timestamp_start = datetime.now().isoformat()
[e2e-llm-inference-service]         logger.info(
[e2e-llm-inference-service]             f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]         start_time = time.time()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           result = func(*args, **kwargs)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/logging.py:40:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client = 
[e2e-llm-inference-service] given = {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service]  'kind': 'LLMInferenceService',
[e2e-llm-inference-service]  'metadata': {'annotations': None,
[e2e-llm-inference-service]  ...lator-lws-aut-fe7a55cc'},
[e2e-llm-inference-service]  {'name': 'scaling-hpa-autoscale-hpa-lws-b344a3ff'}]},
[e2e-llm-inference-service]  'status': None}
[e2e-llm-inference-service] timeout_seconds = 900
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @log_execution
[e2e-llm-inference-service]     def wait_for_llm_isvc_ready(
[e2e-llm-inference-service]         kserve_client: KServeClient,
[e2e-llm-inference-service]         given: V1alpha1LLMInferenceService,
[e2e-llm-inference-service]         timeout_seconds: int = 900,
[e2e-llm-inference-service]     ) -> str:
[e2e-llm-inference-service]         def assert_llm_isvc_ready():
[e2e-llm-inference-service]             out = get_llmisvc(
[e2e-llm-inference-service]                 kserve_client,
[e2e-llm-inference-service]                 given.metadata.name,
[e2e-llm-inference-service]                 given.metadata.namespace,
[e2e-llm-inference-service]                 given.api_version.split("/")[1],
[e2e-llm-inference-service]             )
[e2e-llm-inference-service]
[e2e-llm-inference-service]             if "status" not in out:
[e2e-llm-inference-service]                 raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]             status = out["status"]
[e2e-llm-inference-service]             if "conditions" not in status:
[e2e-llm-inference-service]                 raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]             expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]             got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]             conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]             for condition in conditions:
[e2e-llm-inference-service]                 if condition.get("status") == "True":
[e2e-llm-inference-service]                     got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]             missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]             if missing_conditions:
[e2e-llm-inference-service]                 raise AssertionError(
[e2e-llm-inference-service]                     f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]                 )
[e2e-llm-inference-service]             return True
[e2e-llm-inference-service]
[e2e-llm-inference-service] >       return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:618:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] assertion_fn = <function wait_for_llm_isvc_ready.<locals>.assert_llm_isvc_ready at 0x7f7cef6aad40>
[e2e-llm-inference-service] timeout = 900, interval = 1.0
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def wait_for(
[e2e-llm-inference-service]         assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
[e2e-llm-inference-service]     ) -> Any:
[e2e-llm-inference-service]         """Wait for the assertion to succeed within timeout."""
[e2e-llm-inference-service]         deadline = time.time() + timeout
[e2e-llm-inference-service]         while True:
[e2e-llm-inference-service]             try:
[e2e-llm-inference-service] >               return assertion_fn()
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:628:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def assert_llm_isvc_ready():
[e2e-llm-inference-service]         out = get_llmisvc(
[e2e-llm-inference-service]             kserve_client,
[e2e-llm-inference-service]             given.metadata.name,
[e2e-llm-inference-service]             given.metadata.namespace,
[e2e-llm-inference-service]             given.api_version.split("/")[1],
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service]         if "status" not in out:
[e2e-llm-inference-service]             raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         status = out["status"]
[e2e-llm-inference-service]         if "conditions" not in status:
[e2e-llm-inference-service]             raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]         got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]         for condition in conditions:
[e2e-llm-inference-service]             if condition.get("status") == "True":
[e2e-llm-inference-service]                 got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]         missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]         if missing_conditions:
[e2e-llm-inference-service] >           raise AssertionError(
[e2e-llm-inference-service]                 f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]             )
[e2e-llm-inference-service] E           AssertionError: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:613: AssertionError
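[triage note] Unlike the first failure, this one is not a slow rollout: the controller reports ScalingCRDNotFound ('no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"'), meaning the llm-d VariantAutoscaling CRD is absent from the cluster, so the LLMISVC controller can never reconcile the HPA/LWS scaling path and the 900s wait is guaranteed to time out. A quick existence check is sketched below; the CRD name "variantautoscalings.llmd.ai" is inferred from the kind and group in the error message (an assumption -- verify it against the install manifests):

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    # Assumed plural.group for kind VariantAutoscaling in group llmd.ai.
    crd_name = "variantautoscalings.llmd.ai"
    try:
        crd = client.ApiextensionsV1Api().read_custom_resource_definition(crd_name)
        print(f"{crd_name} present; served versions:",
              [v.name for v in crd.spec.versions if v.served])
    except ApiException as e:
        if e.status == 404:
            print(f"{crd_name} is not installed - apply the llm-d autoscaling "
                  "CRDs before running the autoscaling_hpa tests")
        else:
            raise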
e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scaling-hpa-autoscale-hpa-lws-b344a3ff in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scaling-hpa-autoscale-hpa-lws-b344a3ff [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scaling-hpa-autoscale-hpa-lws-b344a3ff [e2e-llm-inference-service] ------------------------------ Captured log call ------------------------------- [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [test_llm_autoscaling_hpa_lws] [2026-04-24T19:50:43.626999] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'workload-llmd-simulator-lws', 'scaling-hpa'], prompt='KServe is a', service_name='autoscale-hpa-lws', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, [e2e-llm-inference-service] 'managed_fields': None, [e2e-llm-inference-service] 'name': 'autoscale-hpa-lws', [e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test', [e2e-llm-inference-service] 'owner_references': None, [e2e-llm-inference-service] 'resource_version': None, [e2e-llm-inference-service] 'self_link': None, [e2e-llm-inference-service] 'uid': None}, [e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-hpa-lw-1aa98714'}, [e2e-llm-inference-service] {'name': 'workload-llmd-simulator-lws-aut-fe7a55cc'}, [e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-hpa-lws-b344a3ff'}]}, [e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T19:50:43.639176] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, [e2e-llm-inference-service] 'managed_fields': None, [e2e-llm-inference-service] 'name': 'autoscale-hpa-lws', [e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test', [e2e-llm-inference-service] 'owner_references': None, [e2e-llm-inference-service] 'resource_version': None, [e2e-llm-inference-service] 'self_link': None, [e2e-llm-inference-service] 'uid': None}, [e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-hpa-lw-1aa98714'}, 
[e2e-llm-inference-service] {'name': 'workload-llmd-simulator-lws-aut-fe7a55cc'}, [e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-hpa-lws-b344a3ff'}]}, [e2e-llm-inference-service] 'status': None}), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T19:50:43.699154] end - ✅ in 0.060s [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T19:50:43.699323] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, [e2e-llm-inference-service] 'managed_fields': None, [e2e-llm-inference-service] 'name': 'autoscale-hpa-lws', [e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test', [e2e-llm-inference-service] 'owner_references': None, [e2e-llm-inference-service] 'resource_version': None, [e2e-llm-inference-service] 'self_link': None, [e2e-llm-inference-service] 'uid': None}, [e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-hpa-lw-1aa98714'}, [e2e-llm-inference-service] {'name': 'workload-llmd-simulator-lws-aut-fe7a55cc'}, [e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-hpa-lws-b344a3ff'}]}, [e2e-llm-inference-service] 'status': None}, 900), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:43.699632] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:43.705454] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:44.705889] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:44.713109] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:45.713391] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:45.721286] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:46.721638] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:46.729126] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: 
No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:47.729502] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:47.737075] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:48.737529] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:48.744986] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:49.745410] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:49.753615] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:50.753902] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:50.761902] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected 
{'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:51.762348] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:51.770269] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:52.770784] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:52.779401] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:53.779852] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:53.827481] end - ✅ in 0.047s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:54.827948] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:54.836010] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:55.836412] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:55.844757] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:56.845185] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:56.852837] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:57.853129] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:57.928137] end - ✅ in 0.075s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:58.928444] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:50:58.935816] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 
'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:50:59.936260] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:00.028024] end - ✅ in 0.091s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:01.028447] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:01.036386] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
[e2e-llm-inference-service] (the get_llmisvc poll and the identical "Waiting: Missing true conditions" dump above repeat once per second from 2026-04-24T19:51:01 through 2026-04-24T19:51:33; the six conditions never change, only the poll timestamps advance)
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:34.512786] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:34.520927] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:35.521370] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:35.529903] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:36.530333] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:36.538410] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:37.538692] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:37.546977] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:38.547275] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:38.555022] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:39.555348] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T19:51:39.563547] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:40.564015] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:40.572191] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:41.572545] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:41.582760] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:42.583178] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:42.591714] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:43.592203] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:43.600525] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:44.600955] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:44.610024] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got 
[{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:45.610494] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:45.618946] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:46.619433] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:46.628260] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:47.628733] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:47.637130] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:48.637689] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:48.645461] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:49.645963] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:49.653656] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:50.653953] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:50.661867] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:51.662291] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:51.670140] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:52.670603] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:52.678277] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:53.678760] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:53.686591] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:54.687043] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:54.695533] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:55.695960] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:55.703733] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:56.704191] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:56.712812] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:57.713161] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:57.722079] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:58.722462] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:58.730348] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:51:59.730634] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:51:59.738770] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:00.739065] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:00.747140] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:01.747647] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:01.755816] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
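Every False condition above traces to a single root cause: no CustomResourceDefinition for kind VariantAutoscaling is registered under the llmd.ai/v1alpha1 API group on the test cluster, so the llmisvc controller cannot reconcile the autoscaling variant and the service can never become Ready. A minimal diagnostic sketch, assuming a kubeconfig for the test cluster is at hand (the path below is hypothetical) and assuming the conventional lowercase-plural CRD name (variantautoscalings.llmd.ai is an inference, not taken from the log):

  export KUBECONFIG=/path/to/cluster-kubeconfig   # hypothetical path to the test cluster's kubeconfig
  # Empty output here means no API resources are registered under the llmd.ai group.
  kubectl api-resources --api-group=llmd.ai
  # Query the CRD directly; the name assumes the conventional plural form.
  kubectl get crd variantautoscalings.llmd.ai

If the CRD is indeed absent, the fix is likely in test setup rather than in the controller: the llm-d VariantAutoscaling CRD would need to be installed before the autoscale-hpa-lws case runs, or the case skipped on clusters without it.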
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:02.756230] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:02.764575] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:03.764864] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:03.772881] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:04.773269] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:04.781341] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 
'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:05.781747] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:05.789528] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:06.789843] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:06.798386] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:07.798849] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:07.806742] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:08.807057] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:08.815908] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 
'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:09.816248] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:09.828045] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:10.828378] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:10.836840] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:11.837264] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:11.844941] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:12.845370] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:12.853293] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:13.853633] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:13.861854] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:14.862268] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:14.870260] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:15.870541] start - args=(, 
'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:15.880004] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:16.880497] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:16.888017] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 
'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:17.888530] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:17.897942] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:18.898365] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:18.906628] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:19.906969] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:19.915420] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:20.915894] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:20.924578] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:21.924991] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:21.933241] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:22.933705] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:22.942029] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:23.942348] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:23.951129] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:24.951409] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:24.959094] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:25.959379] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:25.967673] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:26.967983] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:26.975948] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] 
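Every False condition above traces back to the same root cause: the controller cannot resolve kind VariantAutoscaling in API group llmd.ai/v1alpha1, i.e. the VariantAutoscaling CRD does not appear to be installed on the test cluster, so reconciliation of the main workload's scaling (reason ScalingCRDNotFound) can never complete and Ready/WorkloadsReady can never turn True. One way to confirm is to list the cluster's CRDs; the sketch below uses the kubernetes Python client against the default kubeconfig context, and the plural CRD name is an assumption inferred from the group/kind in the log, not a name confirmed by this output:

    # Sketch: check whether the VariantAutoscaling CRD exists on the cluster.
    # Assumes credentials are available via the default kubeconfig context;
    # "variantautoscalings.llmd.ai" is a guessed plural name for the CRD.
    from kubernetes import client, config

    config.load_kube_config()  # picks up $KUBECONFIG / current context

    crd_names = {
        crd.metadata.name
        for crd in client.ApiextensionsV1Api().list_custom_resource_definition().items
    }
    target = "variantautoscalings.llmd.ai"
    print(f"{target} installed: {target in crd_names}")

An equivalent shell check is kubectl get crd | grep llmd.ai; no output would mean the llm-d autoscaling CRDs were never applied to this cluster, which would point at test-environment setup rather than the controller itself.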
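For context, the one-second cadence in this log is the test's condition-wait loop polling the LLMInferenceService until every expected condition reports True. A minimal sketch of equivalent logic follows; the API group and plural for LLMInferenceService, the helper name, and the timeout are assumptions rather than the actual code in test_llm_inference_service.py:

    # Minimal sketch, not the real test: poll an LLMInferenceService until
    # the expected conditions are all True, logging the missing ones each second.
    import time

    from kubernetes import client, config

    EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

    def wait_for_llmisvc(name: str, namespace: str, timeout_s: int = 600) -> dict:
        config.load_kube_config()
        api = client.CustomObjectsApi()
        deadline = time.time() + timeout_s
        missing = set(EXPECTED)
        while time.time() < deadline:
            # group/plural below are assumptions for the LLMInferenceService CRD
            obj = api.get_namespaced_custom_object(
                group="serving.kserve.io",
                version="v1alpha1",
                namespace=namespace,
                plural="llminferenceservices",
                name=name,
            )
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return obj  # every expected condition is True
            print(f"Waiting: missing true conditions: {missing}")
            time.sleep(1)  # matches the per-second polls in this log
        raise TimeoutError(f"{namespace}/{name}: still missing {missing}")

    # e.g. wait_for_llmisvc("autoscale-hpa-lws", "kserve-ci-e2e-test")

Because ScalingCRDNotFound is a configuration-level failure, a loop like this can only time out: the conditions will not change until the missing CRD is installed.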
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:32.021909] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:32.029984] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:33.030373] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:33.038568] end - ✅ in 0.008s 
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:34.038897] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:34.046667] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload 
scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:35.046962] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:35.054823] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:36.055106] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:36.062778] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:37.063076] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:37.077359] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:38.077690] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:38.086161] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:39.086633] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:39.094257] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:40.094544] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:40.102801] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:41.103252] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:41.111058] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:42.111477] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:42.119643] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:43.120022] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:43.127570] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:44.127878] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:44.137227] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:45.137687] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T19:52:45.145658] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:46.145921] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:46.153472] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:47.153787] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:47.162553] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:48.162911] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:48.171500] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:49.171944] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:49.179907] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:50.180193] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:50.188095] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got 
[{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:51.188421] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:51.195523] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:52.195838] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:52.204286] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:53.204738] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:53.212896] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:54.213288] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:54.221371] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:55.221690] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:55.229715] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:56.230179] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:56.237851] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:57.238165] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:57.246443] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:58.246756] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:58.254893] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:52:59.255370] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:52:59.263195] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:00.263583] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:00.271396] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
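Every elided iteration fails for the same reason: the controller reports ScalingCRDNotFound because no VariantAutoscaling kind is registered under llmd.ai/v1alpha1 on the cluster, so MainWorkloadReady, WorkloadsReady and therefore Ready can never turn True and the wait simply runs out its timeout. A quick way to confirm the missing registration against the test cluster is a hedged sketch like the following (the CRD's plural name is an assumption, hence the loose grep):

    # Hypothetical diagnostic against the test cluster's kubeconfig:
    # is the llmd.ai API group served at all?
    kubectl api-resources --api-group=llmd.ai

    # Is a VariantAutoscaling CRD registered? The plural/group form
    # (variantautoscalings.llmd.ai) is assumed here, so match loosely.
    kubectl get crd | grep -i variantautoscaling

If both come back empty, the autoscale-hpa-lws case can never pass on this cluster regardless of how long the test waits.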
'2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:01.271690] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:01.279563] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:02.279903] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:02.287618] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:03.287886] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:03.295561] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:04.295821] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:04.302980] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:05.303260] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:05.310951] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:06.311231] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:06.318854] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:07.319116] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:07.327105] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:08.327499] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:08.335867] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:09.336336] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:09.347270] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:10.347575] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:10.355359] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 
'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:11.355644] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:11.365560] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:12.365878] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:12.373580] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:13.373890] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:13.382537] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:14.382797] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:14.391141] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 
'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:15.391658] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:15.400675] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:16.400947] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:16.409527] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:17.410023] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:17.418082] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:18.418512] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:18.427960] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:19.428219] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:19.436558] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:20.437018] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:20.445498] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:21.445959] start - args=(, 
'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:21.454205] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:22.454517] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:22.463062] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 
'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:23.463376] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:23.474638] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:24.475087] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:24.482770] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:25.483219] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:25.491579] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:26.491971] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:26.499552] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:53:27.499887] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:53:27.507602] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:01.800797] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:01.809052] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:02.809499] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:02.817569] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:03.817885] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:03.826509] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:04.826846] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:04.835016] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:05.835345] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:05.843344] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:06.843635] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:06.851835] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:07.852333] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:07.860437] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:08.860719] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:08.868808] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:09.869098] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:09.877284] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:10.877710] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:10.885510] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:11.885792] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:11.893583] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:12.893877] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:12.902049] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:13.902556] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:13.911627] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:14.911920] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:14.920027] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:15.920553] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:15.929460] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 
'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:16.929946] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:16.938090] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:17.938573] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:17.946529] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:18.946942] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:18.954654] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:19.955113] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:19.962867] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 
'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:20.963338] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:20.970952] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:21.971360] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:21.979349] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:22.979707] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:22.987981] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:23.988519] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:23.997283] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:54:24.997646] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:54:25.005277] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
 {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
 {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
 {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'},
 {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'},
 {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
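The ScalingCRDNotFound reason above means the VariantAutoscaling custom resource definition (group llmd.ai, version v1alpha1) is not installed on this test cluster, so the llmisvc controller cannot reconcile the HPA-backed scaling for the autoscale-hpa-lws workload. A minimal diagnostic sketch of how one might confirm that, assuming the standard kubernetes Python client and that the CRD's full plural name is variantautoscalings.llmd.ai (the plural is an assumption; this log only shows the kind and group/version):

    # Hypothetical diagnostic, not part of the test suite: check whether the
    # VariantAutoscaling CRD that the reconciler is asking for exists on the cluster.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
    crds = client.ApiextensionsV1Api().list_custom_resource_definition()
    names = {crd.metadata.name for crd in crds.items}
    # "no matches for kind ... in version llmd.ai/v1alpha1" implies this is False:
    print("variantautoscalings.llmd.ai present:", "variantautoscalings.llmd.ai" in names)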
[e2e-llm-inference-service] (the status above repeated verbatim on one-second polls from 2026-04-24T19:54:26 through 19:55:00, each get_llmisvc call returning in ~0.008s; the duplicate iterations are elided here)
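The repeated start/end/Waiting triplets are a condition-poll loop: fetch the LLMInferenceService, collect the condition types whose status is 'True', and keep waiting while any of the expected set {'Ready', 'RouterReady', 'WorkloadsReady'} is missing. A minimal sketch of that pattern under assumed names (the real helper lives in test_llm_inference_service.py, line 632 in this run):

    # Sketch of the wait loop visible in the log; get_llmisvc is passed in as the
    # fetch callable logged above. Names and signature are assumptions.
    import time

    def wait_for_true_conditions(get_llmisvc, name, namespace, expected, timeout_s=600):
        missing = set(expected)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            obj = get_llmisvc(name, namespace, "v1alpha1")
            true_types = {c["type"] for c in obj.get("status", {}).get("conditions", [])
                          if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return obj  # all expected conditions are True
            print(f"Waiting: Missing true conditions: {missing}, expected {set(expected)}")
            time.sleep(1)  # the log shows roughly one poll per second
        raise TimeoutError(f"conditions never became True: {missing}")

Because the missing VariantAutoscaling CRD can never appear on its own, none of the expected conditions can turn True in this run, and a loop like this can only run out its timeout, as the remaining log shows.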
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:01.303368] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:01.311389] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got
[{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:02.311803] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:02.320152] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:03.320555] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:03.328547] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:04.328984] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:04.337060] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:05.337419] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:05.345514] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:06.345898] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:06.354507] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:07.354795] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:07.362557] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:08.362838] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:08.372595] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:09.372834] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:09.380732] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:10.381061] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:10.388680] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:11.389089] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:11.397071] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:12.397364] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:12.405743] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:13.406032] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:13.414797] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:14.415210] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:14.423081] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:15.423533] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:15.432236] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:16.432610] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:16.440823] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:17.441346] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:17.449718] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:18.450182] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:18.458088] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:19.458430] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:19.466908] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:20.467370] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:20.475031] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:21.475353] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:21.483479] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 
'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:22.483808] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:22.491933] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:23.492421] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:23.500377] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:24.500741] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:24.511431] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:55:25.511925] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:55:25.521142] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 
'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:00.813787] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:00.821940] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:01.822313] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T19:56:01.830056] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:02.830439] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:02.838386] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:03.838675] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:03.846928] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:04.847422] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:04.855964] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:05.856418] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:05.864243] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:06.864690] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:06.872649] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got 
[{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:07.872935] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:07.880728] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:08.881025] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:08.889662] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:09.889952] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:09.897614] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:10.897989] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:10.906457] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:11.906933] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:11.915064] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:12.915367] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:12.923876] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:13.924207] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:13.932775] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:14.933221] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:14.941365] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:15.941838] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:15.951352] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:16.951682] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:16.959876] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:17.960370] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:17.968778] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:18.969227] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:18.977450] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:19.977772] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:19.986016] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:20.986496] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:20.994997] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:21.995262] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:22.002863] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:23.003172] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:23.011113] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:24.011480] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:24.019810] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:25.020056] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:25.029048] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:26.029401] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] [... the same get_llmisvc poll (each call completing in 0.007-0.019s) and the same "Waiting: Missing true conditions" status block repeat once per second from 2026-04-24T19:56:26 through 19:56:58 and beyond; the conditions above never change ...]
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:56:59.320203] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:56:59.328555] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:00.329007] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:00.337061] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:01.337477] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:01.345916] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:02.346389] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:02.354850] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:03.355353] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:03.363628] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:04.363906] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:04.377049] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:05.377604] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:05.386478] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:06.386798] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:06.394703] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:07.395046] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T19:57:07.403567] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:08.404064] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:08.412541] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:09.412971] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:09.420631] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:10.421064] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:10.428484] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:11.428761] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:11.436708] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:12.437155] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:12.445246] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got 
[{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:13.445583] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:13.453423] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:14.453844] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:14.461775] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:15.462194] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:15.470384] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:16.470681] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:16.478356] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:17.478803] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:17.486574] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:18.486863] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:18.495098] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:19.495603] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:19.507408] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:20.507788] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:20.516516] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:21.517027] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:21.524882] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:22.525231] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:22.533834] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:23.534116] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:23.542471] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:24.542934] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:24.552177] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:25.552641] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:25.560437] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:26.560891] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:26.568976] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:27.569350] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:27.578250] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:28.578549] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:28.588215] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:29.588523] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:29.597332] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:30.597622] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:30.605724] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:31.606164] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:31.613828] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:32.614222] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:32.622358] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 
'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:33.622734] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:33.630978] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:34.631482] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:34.639901] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:35.640333] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:35.647787] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:36.648068] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:36.656043] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 
'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:37.656499] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:37.664450] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:38.664894] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:38.672385] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:39.672836] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:39.680696] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:40.681123] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:40.689031] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:41.689347] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:41.697887] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:42.698365] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:42.706394] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:43.706652] start - args=(, 
'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:43.714989] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:44.715420] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:44.723836] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 
'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:45.724292] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:45.732563] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:46.732871] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:46.741107] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:47.741608] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:47.749522] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:48.749791] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:48.756956] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:49.757264] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:49.765649] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:50.765969] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:50.773868] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:57:51.774150] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:57:51.782391] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
[... the identical get_llmisvc start/end pair and the same six-condition Waiting status repeat once per second from 2026-04-24T19:57:51 through 2026-04-24T19:58:24; duplicate entries elided ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:25.058582] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:25.066187] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:26.066487] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:26.074326] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:27.074592] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:27.081954] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:28.082243] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:28.090649] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:29.090943] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:29.098942] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:30.099423] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:30.107613] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:31.107893] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:31.117180] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:32.117695] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:32.125457] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:33.125838] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:33.133778] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:34.134084] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:34.142542] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:35.142910] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:35.150646] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:36.150976] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:36.159427] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:37.159881] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:37.168731] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:38.169177] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:38.178275] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 
'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:39.178789] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:39.186955] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:40.187376] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:40.195235] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:41.195824] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:41.205170] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:42.205826] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:42.213810] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 
'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:43.214280] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:43.226492] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:44.226779] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:44.235089] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:45.235622] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:45.243713] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:46.243998] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:46.252692] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:47.253221] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:47.261994] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:48.262368] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:48.270021] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:58:49.270363] start - args=(, 
'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:58:49.278406] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got (all six conditions have lastTransitionTime 2026-04-24T19:50:49Z):
- MainWorkloadReady: status False, reason ScalingCRDNotFound, severity Info, message 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"'
- PresetsCombined: status True, severity Info
- Ready: status False, reason ScalingCRDNotFound, same VariantAutoscaling message as MainWorkloadReady
- RouterReady: status Unknown
- WorkerWorkloadReady: status False, reason Progressing, severity Info, message 'LWS is progressing'
- WorkloadsReady: status False, reason ScalingCRDNotFound, same VariantAutoscaling message as MainWorkloadReady
[... the get_llmisvc poll (a start/end pair, each call completing in 0.007-0.010s) and this exact Waiting status repeat once per second from 2026-04-24T19:58:50 through 2026-04-24T19:59:23 with no change to any condition; the duplicate iterations are elided ...]
[{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:24.570529] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:24.579378] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:25.579668] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:25.587473] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:26.587784] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:26.595478] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:27.595759] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:27.603562] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:28.603840] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:28.611638] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:29.611912] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:29.620033] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:30.620502] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:30.629029] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:31.629371] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:31.637350] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
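While the loop runs, the same conditions can be read straight off the cluster. A minimal sketch of that check, assuming the resource is addressable by the kind name llminferenceservice (the exact resource name/group is an assumption, not something this log confirms):

  # Print the conditions the test helper keeps polling; the resource name is
  # assumed here and may need adjusting to how the CRD registers the kind.
  kubectl get llminferenceservice autoscale-hpa-lws -n kserve-ci-e2e-test \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'

The "Waiting" dump above is exactly this list, re-serialized by the test's logger on every one-second poll.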
[e2e-llm-inference-service] [the once-per-second get_llmisvc poll and the identical "Waiting: Missing true conditions" dump repeat unchanged through 2026-04-24T19:59:53; every iteration still reports ScalingCRDNotFound, with lastTransitionTime frozen at 2026-04-24T19:50:49Z]
kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:53.816536] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:53.824738] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:54.825160] start - args=(, 
'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:54.832471] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:55.832787] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:55.841387] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 
'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:56.841806] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:56.850468] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:57.850858] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:57.859065] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:58.859574] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:58.867591] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T19:59:59.867888] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T19:59:59.876099] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:00.876631] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:00.884923] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:01.885271] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:01.893649] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:02.894059] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:02.902039] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:03.902363] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:03.910884] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:04.911237] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:04.919514] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:05.920022] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:05.927960] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] 
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:06.928277] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:06.936615] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:07.936938] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:07.944754] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 
'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:08.945075] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:08.952986] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:09.953474] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:09.961203] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:10.961505] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:10.969587] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:11.969999] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:11.977710] end - ✅ in 0.007s 
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:12.977995] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:12.986021] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload 
scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:13.986566] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:13.995818] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:14.996109] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:15.004272] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:16.004832] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:16.012653] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:17.013020] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:17.021132] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:18.021545] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:18.029700] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:19.030046] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:19.038237] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:20.038574] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:20.046805] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
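[editor's note] Every poll in the elided span fails identically: the llmisvc controller reports reason 'ScalingCRDNotFound' because no CRD on the test cluster serves kind "VariantAutoscaling" in "llmd.ai/v1alpha1", so the main workload's VariantAutoscaling object can never be reconciled and the Ready and WorkloadsReady conditions never turn True. The "no matches for kind ... in version ..." string is the standard Kubernetes error for a kind whose CRD is not installed. A quick way to confirm this against the cluster kubeconfig is sketched below; this is a hypothetical diagnostic, not part of the test run, and it greps for the llmd.ai API group rather than assuming the CRD's plural resource name, which the log does not show:

    # List installed CRDs and look for any in the llmd.ai API group;
    # empty output means the VariantAutoscaling CRD was never installed.
    kubectl get crd -o name | grep 'llmd.ai'

    # Ask the API server which resources the llmd.ai group serves;
    # no rows under the header likewise confirms the group is absent.
    kubectl api-resources --api-group=llmd.ai

If both commands come back empty, the remediation the error points at is installing the llm-d VariantAutoscaling CRD (or disabling the VA scaling path) in the test environment before running the autoscale-hpa-lws case; the wait loop itself is behaving as designed and will keep polling until the test timeout.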
'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:21.047246] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:21.055218] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:22.055537] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:22.063585] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:23.063990] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:23.072241] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:24.072583] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T20:00:24.081193] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:25.081621] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:25.089963] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:26.090421] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:26.098574] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:27.098855] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:27.106625] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:28.106922] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:28.114710] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:29.114969] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:29.123215] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got 
[{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:30.123687] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:30.131534] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:31.131990] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:31.142165] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:32.142600] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:32.150149] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:33.150499] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:33.158917] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:34.159267] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:34.167430] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:35.167835] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:35.175366] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:36.175706] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:36.183427] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:37.183858] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:37.191405] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:38.191702] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:38.199170] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:39.199478] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:39.206853] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:40.207135] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:40.215108] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:41.215606] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:41.223652] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:42.223958] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:42.232012] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:43.232326] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:43.239923] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:44.240288] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:44.248792] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:45.249094] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:45.257087] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:00:46.257548] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:00:46.265632] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, ...]

From 2026-04-24T20:00:47 to 2026-04-24T20:01:21 the test polls get_llmisvc('autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1') once per second (each call returning in 0.007-0.013s) and logs the identical status every time: "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}". Each iteration reports the same six conditions, all with lastTransitionTime 2026-04-24T19:50:49Z:

  MainWorkloadReady    False    reason=ScalingCRDNotFound, severity=Info    message: failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"
  PresetsCombined      True     severity=Info
  Ready                False    reason=ScalingCRDNotFound    message: (same VariantAutoscaling failure as MainWorkloadReady)
  RouterReady          Unknown
  WorkerWorkloadReady  False    reason=Progressing, severity=Info    message: LWS is progressing
  WorkloadsReady       False    reason=ScalingCRDNotFound    message: (same VariantAutoscaling failure as MainWorkloadReady)
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:21.576006] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:21.583705] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:22.584121] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:22.592483] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:23.592771] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:23.600671] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:24.600929] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:24.609469] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:25.609926] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:25.618617] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:26.619014] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:26.627264] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:27.627759] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:27.636010] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:28.636423] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:28.644211] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:29.644671] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T20:01:29.652799] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:30.653169] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:30.661871] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:31.662244] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:31.670127] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:32.670422] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:32.678010] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:33.678355] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:33.686354] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:34.686629] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:34.694845] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got 
[{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:35.695342] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:35.703486] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:36.703794] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:36.712128] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:37.712611] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:37.721150] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
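For local reproduction, the test's one-second polling loop amounts to waiting for the resource's status conditions to become True; outside the suite the same check can be approximated with kubectl wait (a sketch; the llminferenceservice resource name is assumed, and on this cluster the command would simply time out for the reason above):

  # Block until the LLMInferenceService reports Ready=True, or give up
  # after 10 minutes; with the VariantAutoscaling CRD missing, Ready
  # stays False and the wait times out.
  kubectl wait llminferenceservice/autoscale-hpa-lws \
    --for=condition=Ready -n kserve-ci-e2e-test --timeout=600s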
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:48.807189] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:48.815362] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:49.815659] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:49.825021] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:50.825516] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:50.834212] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:51.834603] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:51.843132] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:52.843654] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:52.851885] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:53.852339] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:53.860197] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:54.860693] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:54.869095] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 
'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:55.869418] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:55.876768] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:56.877052] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:56.885144] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:57.885621] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:57.893719] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:58.894107] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:58.901600] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 
'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:01:59.902066] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:01:59.912073] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:00.912385] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:00.920485] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:01.920902] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:01.928939] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:02.929222] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:02.937277] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:03.937585] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:03.947090] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:04.947405] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:04.954814] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:05.955116] start - args=(, 
'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:05.963018] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:06.963534] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:06.974975] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 
'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:07.975441] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:07.983716] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:08.984219] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:08.992507] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:09.992922] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:10.001281] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:11.001826] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:11.009368] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:12.009701] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:12.017758] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:13.018186] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:13.026807] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:14.027279] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:14.036637] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:15.037116] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:15.045595] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:16.045945] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:16.053887] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:17.054351] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:17.062904] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] 
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:18.063368] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:18.071361] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:19.071656] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:19.079967] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 
'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:20.080449] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:20.088206] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:21.088589] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:21.095966] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:22.096285] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:22.104738] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:23.105025] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:23.113411] end - ✅ in 0.008s 
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:24.113857] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:24.122238] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload 
scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:25.122708] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:25.130882] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:26.131162] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:26.138984] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:27.139383] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:27.148046] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:28.148311] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:28.155907] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:29.156195] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:29.164316] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:30.164636] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:30.172481] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:31.172868] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:31.180862] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:32.181358] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:32.189526] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:33.189828] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:33.197879] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:34.198354] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:34.206665] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:35.206950] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T20:02:35.214743] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:36.215024] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:36.223061] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:37.223614] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:37.232380] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:38.232906] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:38.240762] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:39.241049] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:39.249779] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:40.250066] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:40.258070] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got 
[{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:41.258374] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:41.266442] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:42.266724] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:42.275282] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:43.275811] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:43.283572] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:44.284040] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:44.292386] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:45.292789] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:45.300895] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:46.301346] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:46.309873] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:47.310344] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:47.319441] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:48.319735] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:48.327712] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:49.328160] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:49.336907] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:50.337288] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:50.345528] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:51.345838] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:51.353630] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:52.353893] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:52.361762] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:53.362077] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:53.370636] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:54.371000] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:54.378745] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:55.379098] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:55.387946] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:56.388219] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:56.395948] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:57.396264] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:57.404468] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:58.404875] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:58.413744] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:02:59.414251] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:02:59.424265] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:00.424567] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:00.432789] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 
'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:01.433259] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:01.441402] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:02.441719] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:02.449442] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:03.449735] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:03.457778] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:04.458186] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:04.466183] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 
'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:05.466681] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:05.475336] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:06.475671] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:06.483801] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:07.484086] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:07.491834] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:08.492122] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:08.500168] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:09.500635] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:09.509060] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:10.509487] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:10.527638] end - ✅ in 0.018s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:11.528004] start - args=(, 
'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:11.627780] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:12.628158] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:12.636056] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 
'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:13.636359] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:13.644279] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:14.644601] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:14.652561] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:15.652977] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:15.660966] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:16.661473] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:16.669504] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:17.669796] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:17.677486] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:18.677763] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:18.685267] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:19.685723] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:19.693123] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:20.693441] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:20.700833] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:21.701136] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:21.710942] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:22.711394] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:22.719539] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] 
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:23.720000] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:23.727889] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:24.728181] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:24.735689] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 
'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:25.736153] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:25.827594] end - ✅ in 0.091s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:26.828141] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:26.927529] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:27.927890] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:28.027394] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:29.027714] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:29.035348] end - ✅ in 0.007s 
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:30.035638] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:30.044008] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload 
scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:31.044473] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:31.052191] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:32.052521] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:32.061380] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:33.061709] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:33.070614] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:34.071120] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:34.079084] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:35.079421] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:35.087501] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:36.087827] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:36.095644] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:37.096000] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:37.104187] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:38.104549] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:38.112871] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:39.113158] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:39.121262] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:40.121751] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:40.129746] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:41.130065] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T20:03:41.138424] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:42.138910] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:42.147174] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:43.147494] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:43.155857] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:44.156372] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:44.165703] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main 
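The repeating "Waiting: Missing true conditions" line is the signature of a poll-until-true loop over status.conditions. Below is a minimal sketch of that pattern, assuming a lookup callable in the role of the suite's get_llmisvc; the names are illustrative, not the test's actual helper:

    import time

    def wait_for_conditions(get_resource, name, namespace, expected,
                            timeout=900, interval=1):
        """Poll until every condition type in `expected` has status "True".

        `get_resource` stands in for a lookup like get_llmisvc and is assumed
        to return the resource as a dict carrying status.conditions.
        """
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            conditions = get_resource(name, namespace).get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = expected - true_types
            if not missing:
                return conditions
            # Mirrors the log line above: report what is still missing plus
            # the full condition dump, then retry after a short sleep.
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {expected}, got {conditions}")
            time.sleep(interval)
        raise TimeoutError(f"conditions {expected} not all True after {timeout}s")

With a condition stuck on ScalingCRDNotFound, a loop like this can only spin until its timeout, which is exactly the shape of the log: the same dump once per second, for minutes.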
The reason code that never clears, ScalingCRDNotFound, points at cluster setup rather than at the workload itself: "no matches for kind" is the standard Kubernetes error for an unregistered API, meaning the CRD backing VariantAutoscaling in the llmd.ai/v1alpha1 group was never installed on this cluster.
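One way to confirm that from Python with the official kubernetes client is sketched below. The CRD name variantautoscalings.llmd.ai is an assumption derived from the kind ("VariantAutoscaling") and group ("llmd.ai") in the error message, so verify it against the autoscaler's own manifests:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside the cluster

    # Assumed plural CRD name, built from kind "VariantAutoscaling" + group "llmd.ai".
    CRD_NAME = "variantautoscalings.llmd.ai"

    api = client.ApiextensionsV1Api()
    try:
        crd = api.read_custom_resource_definition(CRD_NAME)
        served = [v.name for v in crd.spec.versions if v.served]
        print(f"{CRD_NAME} is installed, serving versions: {served}")
    except client.exceptions.ApiException as exc:
        if exc.status == 404:
            # The same situation the controller reports as "no matches for
            # kind": the API group/version is not registered on the cluster.
            print(f"{CRD_NAME} not found - install the CRD before running the suite")
        else:
            raise

If this reports 404, installing the CRD (it presumably ships with the llm-d autoscaling component that owns the llmd.ai group) before the suite starts should let MainWorkloadReady progress past ScalingCRDNotFound.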
[{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:47.182824] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:47.190652] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 
'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:48.190952] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:48.199202] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:49.199768] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:49.208425] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:50.208812] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:50.217759] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:51.218081] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:51.226290] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:52.226803] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:52.235048] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:53.235362] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:53.243080] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:54.243362] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:54.252712] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:55.253158] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:55.261177] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:56.261493] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:56.269686] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:57.270143] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:57.278572] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:58.279049] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:58.286813] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:03:59.287263] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:03:59.294891] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:00.295371] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:00.303501] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:01.303777] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:01.312105] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:02.312627] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:02.321034] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:03.321556] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:03.329689] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: 
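Note on the repeated failure above: every reconcile attempt ends in ScalingCRDNotFound because the controller looks up kind "VariantAutoscaling" in group/version llmd.ai/v1alpha1 and the API server has no matching CRD registered, so WorkloadsReady and Ready can never turn True and the test polls until its timeout. Below is a minimal sketch of how one could confirm the missing CRD from this environment with the standard kubernetes Python client; the <plural>.<group> name variantautoscalings.llmd.ai is inferred from the kind and group in the error message, so treat it as an assumption rather than the canonical llm-d CRD name.

# Sketch: verify whether the VariantAutoscaling CRD is registered.
# Assumes the standard `kubernetes` Python client and the kubeconfig
# written by the get-kubeconfig step; the CRD name below is inferred
# from the error message, not taken from the llm-d manifests.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the test pod

# List every registered CRD and check for the one the controller needs.
crds = client.ApiextensionsV1Api().list_custom_resource_definition()
names = {crd.metadata.name for crd in crds.items}

target = "variantautoscalings.llmd.ai"  # assumed <plural>.<group> form
print(f"{target} installed: {target in names}")

Equivalently, running kubectl get crd against the same kubeconfig and grepping for llmd.ai would show whether any llm-d CRDs made it onto the cluster; if none did, a likely fix is to install the VariantAutoscaling CRD (or disable VA-based scaling) before the autoscale-hpa-lws case runs.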
no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:04.329974] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:04.337742] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:05.338048] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:05.346971] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:06.347236] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:06.354230] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 
'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:07.354546] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:07.427988] end - ✅ in 0.073s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:08.428286] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:08.439081] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:09.439347] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:09.446820] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:10.447198] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:10.528029] end - ✅ in 0.080s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 
'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:11.528393] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:11.627546] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:12.627889] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:12.727968] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:13.728351] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:13.828279] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:14.828701] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:14.927694] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:15.928074] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:15.935607] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:16.935912] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:16.943901] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:17.944368] start - args=(, 
'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:17.952581] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:18.952878] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:18.960143] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 
'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:19.960499] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:19.968706] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:20.968970] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:20.976625] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:21.977053] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:21.985277] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:22.985851] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:22.993336] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:23.993699] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:24.001737] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:25.002200] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:25.012902] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:26.013391] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:26.021348] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:27.021647] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:27.030086] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:28.030653] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:28.038635] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:29.039124] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:29.047041] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] 
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:30.047574] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:30.055289] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:31.055605] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:31.063594] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 
'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:32.063882] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:32.071654] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:33.071933] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:33.079592] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:34.079972] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:34.088994] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:35.089412] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:35.097388] end - ✅ in 0.008s 
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:36.097902] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:36.105669] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload 
scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:37.106016] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:37.114685] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:38.114972] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:38.123152] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:39.123487] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:39.131457] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:40.131894] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:40.139882] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:41.140352] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:41.148429] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:42.148711] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:42.155898] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:43.156190] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:43.163486] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:44.163795] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:44.227382] end - ✅ in 0.063s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:04:45.227696] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:04:45.327869] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:18.043998] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:18.053895] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:19.054198] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:19.062279] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:20.062643] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:20.070366] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:21.070909] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:21.078925] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:22.079340] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:22.088445] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:23.088858] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:23.096913] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:24.097212] start - args=(, 
'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:24.105978] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:25.106525] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:25.115191] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 
'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:26.115636] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:26.123682] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:27.124085] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:27.132535] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 
'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:28.132841] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:28.228183] end - ✅ in 0.095s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:29.228688] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:29.238807] end - ✅ in 0.010s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:30.239103] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:30.327770] end - ✅ in 0.088s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: 
failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:31.328216] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:31.433734] end - ✅ in 0.105s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:32.434249] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:32.527647] end - ✅ in 0.093s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:33.527977] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:33.628456] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:34.628948] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:34.727841] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:35.728235] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:35.735936] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] 
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:36.736282] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:36.744432] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:37.744834] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:37.829506] end - ✅ in 0.084s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 
'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:38.829940] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:39.028715] end - ✅ in 0.198s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:40.029166] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:40.234438] end - ✅ in 0.205s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:41.234785] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:41.428019] end - ✅ in 0.193s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:42.428563] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:42.436873] end - ✅ in 0.008s 
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:43.437243] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:43.527363] end - ✅ in 0.090s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload 
scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:44.527791] start - args=(, 'autoscale-hpa-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:44.628193] end - ✅ in 0.100s [e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T20:05:44.628498] end - ❌ 900.929s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T20:05:44.628728] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, [e2e-llm-inference-service] 'managed_fields': None, [e2e-llm-inference-service] 'name': 'autoscale-hpa-lws', [e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test', [e2e-llm-inference-service] 'owner_references': None, [e2e-llm-inference-service] 'resource_version': None, [e2e-llm-inference-service] 'self_link': None, [e2e-llm-inference-service] 'uid': None}, [e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-hpa-lw-1aa98714'}, [e2e-llm-inference-service] {'name': 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T20:05:44.628728] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceService', 'metadata': {'annotations': None, 'creation_timestamp': None, 'deletion_grace_period_seconds': None, 'deletion_timestamp': None, 'finalizers': None, 'generate_name': None, 'generation': None, 'labels': None, 'managed_fields': None, 'name': 'autoscale-hpa-lws', 'namespace': 'kserve-ci-e2e-test', 'owner_references': None, 'resource_version': None, 'self_link': None, 'uid': None}, 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-hpa-lw-1aa98714'}, {'name': 'workload-llmd-simulator-lws-aut-fe7a55cc'}, {'name': 'scaling-hpa-autoscale-hpa-lws-b344a3ff'}]}, 'status': None}), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [delete_llmisvc] [2026-04-24T20:05:44.827805] end - ✅ in 0.199s
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_autoscaling_hpa_lws] [2026-04-24T20:05:44.827882] end - ❌ 901.201s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T19:50:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-hpa-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] _ test_llm_stop_feature[router-managed-workload-single-cpu-model-fb-opt-125m] __
[e2e-llm-inference-service] [gw1] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-llm-inference-service]
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-single-cpu', 'model-fb-opt-125m'], prompt='KServe is a', service_name=... {'name': 'model-fb-opt-125m-stop-feature-8f213f2f'}]}, 'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service] @pytest.mark.llminferenceservice
[e2e-llm-inference-service] @pytest.mark.asyncio(loop_scope="session")
[e2e-llm-inference-service] @pytest.mark.parametrize(
[e2e-llm-inference-service]     "test_case",
[e2e-llm-inference-service]     [
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-managed",
[e2e-llm-inference-service]                     "workload-single-cpu",
[e2e-llm-inference-service]                     "model-fb-opt-125m",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 service_name="stop-feature-test",
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[pytest.mark.cluster_cpu, pytest.mark.cluster_single_node],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]     ],
[e2e-llm-inference-service]     indirect=["test_case"],
[e2e-llm-inference-service]     ids=generate_test_id,
[e2e-llm-inference-service] )
[e2e-llm-inference-service] @log_execution
[e2e-llm-inference-service] def test_llm_stop_feature(test_case: TestCase):
[e2e-llm-inference-service]     """Test that stopping an LLMInferenceService sets the Ready condition to False with reason Stopped."""
[e2e-llm-inference-service]     inject_k8s_proxy()
[e2e-llm-inference-service]
[e2e-llm-inference-service]     kserve_client = KServeClient(
[e2e-llm-inference-service]         config_file=os.environ.get("KUBECONFIG", "~/.kube/config"),
[e2e-llm-inference-service]         client_configuration=client.Configuration(),
[e2e-llm-inference-service]     )
[e2e-llm-inference-service]
[e2e-llm-inference-service]     service_name = test_case.llm_service.metadata.name
[e2e-llm-inference-service]     test_failed = False
[e2e-llm-inference-service]
[e2e-llm-inference-service]     # Disable auth for this test
[e2e-llm-inference-service]     if not test_case.llm_service.metadata.annotations:
[e2e-llm-inference-service]         test_case.llm_service.metadata.annotations = {}
[e2e-llm-inference-service]     test_case.llm_service.metadata.annotations[
[e2e-llm-inference-service]         "security.opendatahub.io/enable-auth"
[e2e-llm-inference-service]     ] = "false"
[e2e-llm-inference-service]
[e2e-llm-inference-service]     try:
[e2e-llm-inference-service]         # Create the service
[e2e-llm-inference-service]         print(f"Creating LLMInferenceService {service_name}")
[e2e-llm-inference-service]         create_llmisvc(kserve_client, test_case.llm_service)
[e2e-llm-inference-service]
[e2e-llm-inference-service]         # Wait for the service to be ready
[e2e-llm-inference-service]         print(f"Waiting for LLMInferenceService {service_name} to be ready")
[e2e-llm-inference-service] >       wait_for_llm_isvc_ready(
[e2e-llm-inference-service]             kserve_client, test_case.llm_service, test_case.wait_timeout
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service_stop.py:88:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] args = (, {'api_version': 'serving.kserve.io/v1alpha1', 'kin...featur-e7f09208'}, {'name': 'model-fb-opt-125m-stop-feature-8f213f2f'}]}, 'status': None}, 900)
[e2e-llm-inference-service] kwargs = {}, func_name = 'wait_for_llm_isvc_ready'
[e2e-llm-inference-service] timestamp_start = '2026-04-24T20:05:30.385348', start_time = 1777061130.3857853
[e2e-llm-inference-service] duration = 900.6426417827606, timestamp_end = '2026-04-24T20:20:31.028427'
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @functools.wraps(func)
[e2e-llm-inference-service]     def wrapper(*args, **kwargs):
[e2e-llm-inference-service]         func_name = func.__name__
[e2e-llm-inference-service]
[e2e-llm-inference-service]         timestamp_start = datetime.now().isoformat()
[e2e-llm-inference-service]         logger.info(
[e2e-llm-inference-service]             f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]         start_time = time.time()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           result = func(*args, **kwargs)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/logging.py:40:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client =
[e2e-llm-inference-service] given = {'api_version': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceService', 'metadata': {'annotations': {'security....-stop-featur-e7f09208'}, {'name': 'model-fb-opt-125m-stop-feature-8f213f2f'}]}, 'status': None}
[e2e-llm-inference-service] timeout_seconds = 900
[e2e-llm-inference-service]
[e2e-llm-inference-service] @log_execution
[e2e-llm-inference-service] def wait_for_llm_isvc_ready(
[e2e-llm-inference-service]     kserve_client: KServeClient,
[e2e-llm-inference-service]     given: V1alpha1LLMInferenceService,
[e2e-llm-inference-service]     timeout_seconds: int = 900,
[e2e-llm-inference-service] ) -> str:
[e2e-llm-inference-service]     def assert_llm_isvc_ready():
[e2e-llm-inference-service]         out = get_llmisvc(
[e2e-llm-inference-service]             kserve_client,
[e2e-llm-inference-service]             given.metadata.name,
[e2e-llm-inference-service]             given.metadata.namespace,
[e2e-llm-inference-service]             given.api_version.split("/")[1],
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service]         if "status" not in out:
[e2e-llm-inference-service]             raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         status = out["status"]
[e2e-llm-inference-service]         if "conditions" not in status:
[e2e-llm-inference-service]             raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]         got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]         for condition in conditions:
[e2e-llm-inference-service]             if condition.get("status") == "True":
[e2e-llm-inference-service]                 got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]         missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]         if missing_conditions:
[e2e-llm-inference-service]             raise AssertionError(
[e2e-llm-inference-service]                 f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]             )
[e2e-llm-inference-service]         return True
[e2e-llm-inference-service]
[e2e-llm-inference-service] >   return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:618:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] assertion_fn = <function wait_for_llm_isvc_ready.<locals>.assert_llm_isvc_ready at 0x7f60bdda4c20>
[e2e-llm-inference-service] timeout = 900, interval = 1.0
[e2e-llm-inference-service]
[e2e-llm-inference-service] def wait_for(
[e2e-llm-inference-service]     assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
[e2e-llm-inference-service] ) -> Any:
[e2e-llm-inference-service]     """Wait for the assertion to succeed within timeout."""
[e2e-llm-inference-service]     deadline = time.time() + timeout
[e2e-llm-inference-service]     while True:
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           return assertion_fn()
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:628:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def assert_llm_isvc_ready():
[e2e-llm-inference-service]         out = get_llmisvc(
[e2e-llm-inference-service]             kserve_client,
[e2e-llm-inference-service]             given.metadata.name,
[e2e-llm-inference-service]             given.metadata.namespace,
[e2e-llm-inference-service]             given.api_version.split("/")[1],
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service]         if "status" not in out:
[e2e-llm-inference-service]             raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         status = out["status"]
[e2e-llm-inference-service]         if "conditions" not in status:
[e2e-llm-inference-service]             raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]         got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]         for condition in conditions:
[e2e-llm-inference-service]             if condition.get("status") == "True":
[e2e-llm-inference-service]                 got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]         missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]         if missing_conditions:
[e2e-llm-inference-service] >           raise AssertionError(
[e2e-llm-inference-service]                 f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]             )
[e2e-llm-inference-service] E   AssertionError: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:613: AssertionError
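The final assertion shows RouterReady did flip to True at 20:06:16, so the 900 s budget was ultimately spent waiting on the main workload Deployment, which never reported minimum availability. A sketch for seeing which Deployment in the test namespace is short on replicas (only the namespace below is taken from this log; the rest assumes the standard Python kubernetes client):

    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()
    for d in apps.list_namespaced_deployment("kserve-ci-e2e-test").items:
        s = d.status
        # MinimumReplicasUnavailable means available replicas < desired for too long
        print(f"{d.metadata.name}: desired={s.replicas or 0} "
              f"ready={s.ready_replicas or 0} available={s.available_replicas or 0}")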
[e2e-llm-inference-service] ------------------------------ Captured log setup ------------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-stop-feature-tes-eeaa3ea3 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-stop-feature-tes-eeaa3ea3
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-stop-feature-tes-eeaa3ea3
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-single-cpu-stop-featur-e7f09208 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-single-cpu-stop-featur-e7f09208
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-single-cpu-stop-featur-e7f09208
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig model-fb-opt-125m-stop-feature-8f213f2f in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig model-fb-opt-125m-stop-feature-8f213f2f
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig model-fb-opt-125m-stop-feature-8f213f2f
[e2e-llm-inference-service] ------------------------------ Captured log call -------------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [test_llm_stop_feature] [2026-04-24T20:05:30.328147] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'workload-single-cpu', 'model-fb-opt-125m'], prompt='KServe is a', service_name='stop-feature-test', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceService', 'metadata': {'annotations': None, 'creation_timestamp': None, 'deletion_grace_period_seconds': None, 'deletion_timestamp': None, 'finalizers': None, 'generate_name': None, 'generation': None, 'labels': None, 'managed_fields': None, 'name': 'stop-feature-test', 'namespace': 'kserve-ci-e2e-test', 'owner_references': None, 'resource_version': None, 'self_link': None, 'uid': None}, 'spec': {'baseRefs': [{'name': 'router-managed-stop-feature-tes-eeaa3ea3'}, {'name': 'workload-single-cpu-stop-featur-e7f09208'}, {'name': 'model-fb-opt-125m-stop-feature-8f213f2f'}]}, 'status': None}, model_name='facebook/opt-125m')}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T20:05:30.340802] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceService', 'metadata': {'annotations': {'security.opendatahub.io/enable-auth': 'false'}, 'creation_timestamp': None, 'deletion_grace_period_seconds': None, 'deletion_timestamp': None, 'finalizers': None, 'generate_name': None, 'generation': None, 'labels': None, 'managed_fields': None, 'name': 'stop-feature-test', 'namespace': 'kserve-ci-e2e-test', 'owner_references': None, 'resource_version': None, 'self_link': None, 'uid': None}, 'spec': {'baseRefs': [{'name': 'router-managed-stop-feature-tes-eeaa3ea3'}, {'name': 'workload-single-cpu-stop-featur-e7f09208'}, {'name': 'model-fb-opt-125m-stop-feature-8f213f2f'}]}, 'status': None}), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T20:05:30.385140] end - ✅ in 0.044s
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T20:05:30.385348] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1', 'kind': 'LLMInferenceService', 'metadata': {'annotations': {'security.opendatahub.io/enable-auth': 'false'}, 'creation_timestamp': None, 'deletion_grace_period_seconds': None, 'deletion_timestamp': None, 'finalizers': None, 'generate_name': None, 'generation': None, 'labels': None, 'managed_fields': None, 'name': 'stop-feature-test', 'namespace': 'kserve-ci-e2e-test', 'owner_references': None, 'resource_version': None, 'self_link': None, 'uid': None}, 'spec': {'baseRefs': [{'name': 'router-managed-stop-feature-tes-eeaa3ea3'}, {'name': 'workload-single-cpu-stop-featur-e7f09208'}, {'name': 'model-fb-opt-125m-stop-feature-8f213f2f'}]}, 'status': None}, 900), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:30.385790] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:30.426680] end - ✅ in 0.041s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:31.426903] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:31.627378] end - ✅ in 0.200s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:32.627815] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:32.927026] end - ✅ in 0.299s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:33.927351] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:33.934295] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:34.934751] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:34.941282] end - ✅ in 0.006s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:35.941773] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:35.949059] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:36.949349] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:36.956989] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
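Each iteration above is one GET of the LLMInferenceService followed by the conditions check; for the first several seconds the object has no status.conditions at all, which is expected immediately after creation. The same read can be reproduced outside the suite with the dynamic client; a minimal sketch, assuming the plural resource name llminferenceservices (inferred from the kind, not stated in this log):

    from kubernetes import client, config

    config.load_kube_config()
    obj = client.CustomObjectsApi().get_namespaced_custom_object(
        group="serving.kserve.io", version="v1alpha1",
        namespace="kserve-ci-e2e-test",
        plural="llminferenceservices",  # assumed plural for kind LLMInferenceService
        name="stop-feature-test",
    )
    for c in obj.get("status", {}).get("conditions", []):
        print(c["type"], c["status"], c.get("reason", ""))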
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:37.957250] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:38.127025] end - ✅ in 0.170s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:39.127357] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:39.228209] end - ✅ in 0.101s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'severity': 'Info', 'status': 'False', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'status': 'False', 'type': 'WorkloadsReady'}]
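Here the router is blocked on a Gateway API problem: the HTTPRoute references a backend Service that does not exist yet. A sketch for chasing that from the two names quoted in the condition message (route stop-feature-test-kserve-route, backend stop-feature-test-inference-pool-ip-e0c19087), assuming Gateway API v1 and the standard Python kubernetes client:

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    ns = "kserve-ci-e2e-test"
    route = client.CustomObjectsApi().get_namespaced_custom_object(
        "gateway.networking.k8s.io", "v1", ns, "httproutes",
        "stop-feature-test-kserve-route",
    )
    # Gateway API reports route acceptance/resolution per parent gateway
    for parent in route.get("status", {}).get("parents", []):
        for c in parent.get("conditions", []):
            print(c["type"], c["status"], c.get("reason", ""))
    try:
        client.CoreV1Api().read_namespaced_service(
            "stop-feature-test-inference-pool-ip-e0c19087", ns)
        print("backend Service exists")
    except ApiException as e:
        if e.status == 404:
            print("backend Service missing, matching reason BackendNotFound")
        else:
            raise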
"backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'severity': 'Info', 'status': 'False', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:41.428278] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:41.527691] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'severity': 'Info', 'status': 'False', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message 
"backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:42.528032] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:42.535877] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'severity': 'Info', 'status': 'False', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:43.536210] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:43.543174] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: 
[kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'severity': 'Info', 'status': 'False', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:44.543510] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:44.627906] end - ✅ in 0.084s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'severity': 'Info', 'status': 'False', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: 
[kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:45.628223] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:45.728629] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'severity': 'Info', 'status': 'False', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'The following HTTPRoutes are not ready: [kserve-ci-e2e-test/stop-feature-test-kserve-route: "False" (reason "BackendNotFound", message "backend(stop-feature-test-inference-pool-ip-e0c19087.kserve-ci-e2e-test.svc.cluster.local) not found")]', 'reason': 'HTTPRoutesNotReady', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'message': 'Deployment rollout in progress', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'reason': 'Progressing', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:46.728967] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:46.737779] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:47.738087] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:47.748879] end - ✅ in 0.010s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
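At 20:05:47 the route problem clears (HTTPRoutesReady goes True) and the wait becomes a plain Deployment rollout: every remaining False condition carries reason MinimumReplicasUnavailable. When a rollout stalls like this, the pod conditions usually say why (scheduling, image pull, failing probes); a sketch for a quick look, again assuming only the namespace from this log:

    from kubernetes import client, config

    config.load_kube_config()
    for pod in client.CoreV1Api().list_namespaced_pod("kserve-ci-e2e-test").items:
        # Report phase plus the pod-level Ready condition for each pod
        ready = next((c.status for c in (pod.status.conditions or [])
                      if c.type == "Ready"), "Unknown")
        print(pod.metadata.name, pod.status.phase, "Ready:", ready)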
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:57.059619] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:57.067255] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:58.067630] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:58.074763] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:59.075086] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:59.086059] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:00.086583] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:00.094654] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:01.095016] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:01.102168] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 
'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:02.102497] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:02.111019] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:03.111342] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:03.119840] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum 
availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:04.120143] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:04.128174] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:05.128479] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:05.135994] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:06.136329] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:06.144030] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:07.144374] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:07.157975] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 
'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:08.158409] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:08.227864] end - ✅ in 0.069s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:09.228385] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:09.236421] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got 
[{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:10.236677] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:10.244192] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:11.244536] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[2026-04-24T20:06:11.252686] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:12.252974] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:12.261281] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:13.261746] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:13.269589] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:14.270022] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:14.277403] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:15.277889] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:15.285483] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:16.285940] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:16.293882] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:17.294166] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:17.326898] end - ✅ in 0.032s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:18.327195] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:18.334929] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 
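Editor's note, for readers tracing the wait loop above: each "Waiting: Missing true conditions" line is the expected condition types minus the LLMInferenceService condition types whose status is 'True'. The following is a minimal sketch of that check, not the test's actual source; it assumes the official kubernetes Python client and assumes the CRD coordinates (group "serving.kserve.io", version "v1alpha1", plural "llminferenceservices").

    # Minimal sketch (editor's illustration, not the test's source) of the
    # readiness check behind the "Waiting: Missing true conditions" lines.
    # Assumptions: the official `kubernetes` Python client is installed and
    # the LLMInferenceService CRD is served at group "serving.kserve.io",
    # version "v1alpha1", plural "llminferenceservices".
    import time
    from kubernetes import client, config

    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

    def missing_true_conditions(name: str, namespace: str) -> set:
        # Fetch the custom resource and collect condition types whose
        # status is the string "True".
        api = client.CustomObjectsApi()
        obj = api.get_namespaced_custom_object(
            group="serving.kserve.io",
            version="v1alpha1",
            namespace=namespace,
            plural="llminferenceservices",
            name=name,
        )
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        return EXPECTED - true_types

    def wait_until_ready(name: str, namespace: str, timeout_s: int = 600) -> None:
        # Poll once per second, matching the cadence visible in the log.
        config.load_kube_config()
        deadline = time.time() + timeout_s
        missing = set(EXPECTED)
        while time.time() < deadline:
            missing = missing_true_conditions(name, namespace)
            if not missing:
                return
            print(f"Waiting: Missing true conditions: {missing}, expected {EXPECTED}")
            time.sleep(1)
        raise TimeoutError(f"Conditions never became True: {missing}")

Under this reading, the 20:06:17 entry above is the missing set shrinking from {'Ready', 'WorkloadsReady', 'RouterReady'} to {'Ready', 'WorkloadsReady'} after RouterReady and SchedulerWorkloadReady flipped to 'True' at 20:06:16.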
[e2e-llm-inference-service] [...the poll repeated once per second from 20:06:18 through 20:06:32 with the same result (Missing true conditions: {'Ready', 'WorkloadsReady'}; RouterReady and SchedulerWorkloadReady remained 'True'); duplicate entries elided...]
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:32.518853] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:32.527193] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:33.527466] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:33.534628] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:34.534924] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:34.543076] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:35.543399] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:35.552210] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:36.552492] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:36.561172] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:37.561524] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:37.569951] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:38.570240] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:38.578259] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:39.578685] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:39.587693] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:40.587946] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:40.595757] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:41.596033] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:41.603750] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:42.604048] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:42.611875] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:43.612185] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:43.620114] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:44.620424] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:44.628565] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:45.628996] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:45.636815] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:46.637149] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:46.645046] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:47.645361] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:47.653163] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:06:48.653473] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:48.661147] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:49.661629] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:49.669329] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:50.669649] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:50.677510] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:51.677808] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:51.685497] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:52.685777] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:52.693331] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
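The condition dump above is the entire failure signature, and it repeats unchanged for the rest of this wait loop: the main workload Deployment never reaches minimum availability, so MainWorkloadReady, WorkloadsReady and the top-level Ready stay False with reason MinimumReplicasUnavailable, while HTTPRoutesReady, InferencePoolReady, PresetsCombined, RouterReady and SchedulerWorkloadReady are already True. A minimal sketch of the same readiness check against a live cluster, assuming the official kubernetes Python client and KServe's serving.kserve.io/v1alpha1 API (the llminferenceservices plural is an assumption, not something this log confirms):

```python
from kubernetes import client, config

EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}  # conditions the test waits on

def llmisvc_true_conditions(name: str, namespace: str) -> set:
    """Fetch the LLMInferenceService and return the condition types whose status is 'True'."""
    config.load_kube_config()  # use load_incluster_config() when running inside the cluster
    api = client.CustomObjectsApi()
    obj = api.get_namespaced_custom_object(
        group="serving.kserve.io",      # assumed KServe API group
        version="v1alpha1",             # API version shown in the log
        namespace=namespace,
        plural="llminferenceservices",  # assumed resource plural
        name=name,
    )
    conditions = obj.get("status", {}).get("conditions", [])
    return {c["type"] for c in conditions if c.get("status") == "True"}

missing = EXPECTED - llmisvc_true_conditions("stop-feature-test", "kserve-ci-e2e-test")
print("missing true conditions:", missing or "none")  # in this run: {'Ready', 'WorkloadsReady'}
```

Since MinimumReplicasUnavailable is a stock Deployment condition reason, the natural next step would be inspecting the backing Deployment and its pods in the kserve-ci-e2e-test namespace (e.g. kubectl describe deployment / kubectl get pods) to see why no replica ever became available.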
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:53.693625] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:53.701444] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:54.701721] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:54.709290] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:55.709676] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:55.716845] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:56.717176] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:56.725430] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:57.725888] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:57.741614] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:58.742002] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:58.751017] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:59.751362] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:59.759077] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:00.759348] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:00.766591] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:01.766906] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:01.774539] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:02.774811] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:02.782776] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:44.127524] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:44.136070] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:45.136534] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:45.144212] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:46.144627] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:46.151728] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:47.152009] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:47.159980] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:48.160361] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:48.170058] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:49.170648] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:49.178901] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:50.179325] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:50.186506] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:51.186982] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:51.194137] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:52.194475] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:52.202871] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:53.203340] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:53.210867] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:54.211139] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:54.218808] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:55.219081] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:55.227102] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:56.227587] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:56.234903] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:57.235157] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:57.242399] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:07:58.242678] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:58.249971] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:59.250210] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:59.257706] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:00.257991] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:00.266086] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:01.266355] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:01.273714] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:02.274023] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:02.281940] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:03.282389] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:03.291021] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:04.291344] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:04.319067] end - ✅ in 0.027s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:05.319515] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:05.327553] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:06.327837] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:06.336058] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:07.336555] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:07.344058] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:08.344334] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:08.352056] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:09.352466] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:09.359801] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:10.360122] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:10.367835] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:11.368082] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:11.375535] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:12.375955] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:12.384233] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:13.384678] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:13.393972] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:14.394290] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:14.402355] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the get_llmisvc poll (logging.py:34/43) and this identical "Waiting: Missing true conditions" record repeat once per second from 20:08:15 through 20:08:55; the condition list is unchanged throughout, only the timestamps advance ...]
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:55.725560] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:55.733399] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:56.733867] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:56.742371] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:57.742792] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:57.750218] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:58.750535] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:58.758256] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:59.758599] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:59.766717] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:00.767053] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:00.775156] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:01.775477] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:01.782687] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:02.782989] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:02.790244] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:03.790545] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:03.798589] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:04.798857] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:04.806600] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:05.806875] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:05.814699] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:06.815003] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:06.830858] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:09:07.831352] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:07.839015] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:08.839342] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:08.847638] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:09.848053] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:09.855127] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:10.855447] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:10.862908] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:11.863243] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:11.870911] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:12.871247] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:12.878873] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:13.879181] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:13.886822] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:14.887138] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:14.894696] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:15.895037] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:15.903262] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:16.903614] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:16.911524] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:17.911824] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:17.919442] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:18.919695] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:18.927184] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:19.927562] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:19.934759] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:20.935097] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:20.942828] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:21.943137] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:21.951772] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:22.952037] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:22.959650] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:23.959974] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:23.967925] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the get_llmisvc start/end pair and the identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}" condition dump above repeat roughly once per second from 20:09:24.968201 through 20:10:07.334996; only the timestamps change. Ready, WorkloadsReady, and MainWorkloadReady stay False with reason MinimumReplicasUnavailable ('Deployment does not have minimum availability.'), while HTTPRoutesReady, InferencePoolReady, PresetsCombined, RouterReady, and SchedulerWorkloadReady stay True. ...]
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:07.304269] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:07.334996] end - ✅ in 0.030s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:08.335340] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:08.342748] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:09.343051] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:09.350534] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:10.350874] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:10.358093] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:11.358353] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:11.365799] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:12.366122] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:12.374245] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:13.374535] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:13.392752] end - ✅ in 0.018s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:14.393189] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:14.401139] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:15.401589] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:15.409931] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:16.410318] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:16.417491] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:10:17.417802] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:17.425016] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:18.425354] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:18.433403] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:19.434003] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:19.442169] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:20.442701] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:20.450424] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:21.450744] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:21.458515] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:22.458977] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:22.466378] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:23.466702] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:23.474044] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:24.474344] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:24.481728] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:25.482047] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:25.490195] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:26.490517] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:26.497697] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:27.498011] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:27.506380] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:28.506684] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:28.513964] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:29.514237] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:29.521955] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:30.522331] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:30.529572] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:31.529867] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:31.537620] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:32.537873] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:32.545015] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:33.545320] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:33.553166] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:34.553517] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:34.562242] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:35.562643] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:35.569996] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:36.570330] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:36.577841] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [
  {'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'},
  {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'},
  {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
  {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
  {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'},
  {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'},
  {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'},
  {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the same get_llmisvc start/end pair and an identical 'Waiting: Missing true conditions' dump repeat once per second, 33 more times, from 20:10:37Z through 20:11:09Z; no condition changes in that interval ...]
[... identical poll cycles continue once per second, 9 more times, from 20:11:10Z through 20:11:18Z; MainWorkloadReady, Ready, and WorkloadsReady remain False with reason MinimumReplicasUnavailable throughout ...]
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:18.914349] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:18.922643] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:19.923024] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:19.931035] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:20.931362] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:20.938873] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:21.939140] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:21.946777] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:22.947103] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:22.954739] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:23.955088] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:23.962819] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
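The repeated records above come from a simple readiness poll: fetch the LLMInferenceService once per second and compare the condition types whose status is 'True' against the expected set. As a minimal sketch of that loop (all names below are hypothetical stand-ins, not the actual helpers in test_llm_inference_service.py):

import logging
import time

logger = logging.getLogger("e2e.llmisvc")

def true_condition_types(status: dict) -> set:
    # Condition types currently reporting status == 'True'.
    return {c["type"] for c in status.get("conditions", []) if c.get("status") == "True"}

def wait_for_conditions(get_resource, expected, timeout_s=600, interval_s=1.0):
    # Poll until every expected condition type is 'True', mirroring the
    # 'Waiting: Missing true conditions: ...' lines in the log above.
    missing = set(expected)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = get_resource().get("status", {})
        missing = set(expected) - true_condition_types(status)
        if not missing:
            return
        logger.info("Waiting: Missing true conditions: %s, expected %s, got %s",
                    missing, set(expected), status.get("conditions"))
        time.sleep(interval_s)
    raise TimeoutError(f"conditions never became True: {sorted(missing)}")

In this run RouterReady flipped to True at 20:06:16, so the loop keeps reporting only {'Ready', 'WorkloadsReady'} as missing for the remainder of the wait.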
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:55.211022] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:55.218707] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:56.219021] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:56.227601] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:57.227845] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:57.235678] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:58.235965] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:58.243094] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:59.243420] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:59.250945] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:00.251267] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:00.258458] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:01.258784] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:01.266619] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:02.266914] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:02.274848] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:03.275166] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:03.282966] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:04.283288] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:04.291938] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:05.292249] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:05.300872] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:06.301153] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:06.308932] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:07.309221] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:07.317243] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:08.317582] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:08.325287] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:09.325621] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:09.333382] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:10.333708] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:10.343224] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:11.343576] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:11.351163] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:12.351540] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:12.359090] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:12:13.359462] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:13.367226] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:14.367589] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:14.375127] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:15.375442] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:15.383148] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:16.383522] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:16.391915] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:17.392203] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:17.400672] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:18.401082] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:18.409089] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:19.409382] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:19.419086] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:20.419377] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:20.438224] end - ✅ in 0.019s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:21.438510] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:21.446037] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:22.446321] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:22.454103] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:23.454508] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:23.462104] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:24.462402] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:24.469739] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:25.470064] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:25.477973] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:26.478292] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:26.486563] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:27.486846] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:27.498470] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:28.498761] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:28.506741] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:29.507058] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:29.517590] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:30.517877] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:30.525893] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:31.526167] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:31.534551] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:32.534974] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:32.542875] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:33.543206] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:33.551464] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:34.551832] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:34.560400] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:35.560721] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:35.568101] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:12:36.568371] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:36.575881] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:37.576334] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:37.585518] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:38.585828] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:38.593333] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:39.593652] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:39.603118] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:40.603466] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:40.610631] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:41.610902] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:41.618182] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:42.618544] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:42.634275] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:43.634578] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:43.650679] end - ✅ in 0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:44.651001] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:44.658684] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:45.658972] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:45.666705] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:46.667031] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:46.675320] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:47.675681] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:47.683449] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:48.683972] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:48.691812] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:49.692138] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:49.700290] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:50.700658] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:50.708040] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:51.708396] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:51.716503] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:52.716810] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:52.729543] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:53.729810] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:53.744744] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:54.745029] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:54.755931] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:55.756257] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:55.764399] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:56.764664] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:56.771707] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:57.771998] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:57.780124] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:58.780443] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:58.787824] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:12:59.788121] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:59.795768] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:00.796057] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:00.803045] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:01.803384] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
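The repetition above is just a poll loop: each second the test fetches the LLMInferenceService, collects the condition types whose status is 'True', and diffs them against the expected set. The actual helper around test_llm_inference_service.py:632 is not included in this log, so the sketch below is a reconstruction under assumptions: the get_llmisvc callable, its signature, and the timeout/interval values are illustrative, not the real implementation.

import time

EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

def wait_for_llmisvc_ready(get_llmisvc, name, namespace,
                           version="v1alpha1",
                           timeout_s=600.0, interval_s=1.0):
    """Poll until every expected condition reports status 'True'."""
    deadline = time.monotonic() + timeout_s
    missing = set(EXPECTED)
    while time.monotonic() < deadline:
        resource = get_llmisvc(name, namespace, version)
        conditions = resource.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions
                      if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return resource  # every expected condition is True
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {EXPECTED}, got {conditions}")
        time.sleep(interval_s)
    raise TimeoutError(
        f"{namespace}/{name}: conditions {missing} never became True "
        f"within {timeout_s}s")

In this run the diff never empties: RouterReady flipped True at 20:06:16, but Ready and WorkloadsReady carry the same MinimumReplicasUnavailable reason and timestamps as MainWorkloadReady, so the loop can only run out its timeout.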
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:06.844626] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:06.851941] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:07.852264] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:07.859975] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:08.860365] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:08.867821] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:09.868242] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:09.875243] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:10.875585] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:10.883195] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:11.883554] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:11.890733] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:12.891017] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:12.898627] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:13.898888] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:13.906347] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
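The "Waiting: Missing true conditions" line is a set difference: the condition types the test expects to be True, minus the types whose status currently reports 'True'. A minimal sketch of such a poll loop follows (an assumption for illustration only, not the actual helper in test_llm_inference_service.py; function names are hypothetical):

import time

EXPECTED = {'Ready', 'WorkloadsReady', 'RouterReady'}

def true_condition_types(conditions):
    # Types of all conditions currently reporting status 'True'.
    return {c['type'] for c in conditions if c.get('status') == 'True'}

def wait_for_conditions(get_status, expected=EXPECTED, timeout=600, interval=1.0):
    # Poll get_status() (e.g. the LLMInferenceService .status.conditions list)
    # once per interval until every expected type is True, or raise on timeout.
    deadline = time.monotonic() + timeout
    missing = set(expected)
    while time.monotonic() < deadline:
        conditions = get_status()
        missing = expected - true_condition_types(conditions)
        if not missing:
            return conditions
        print(f"Waiting: Missing true conditions: {missing}, "
              f"expected {expected}, got {conditions}")
        time.sleep(interval)
    raise TimeoutError(f"conditions never became True: {missing}")

Under that reading of the log, MainWorkloadReady stays False with reason MinimumReplicasUnavailable ("Deployment does not have minimum availability."), so Ready and WorkloadsReady never turn True and the loop keeps logging once per second until it gives up.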
[e2e-llm-inference-service] [... the three-record get_llmisvc poll cycle above repeats once per second from 20:13:00.796057 through 20:13:40 with an identical condition list; only the timestamps advance ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:41.120622] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:41.128379] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status':
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:42.128845] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:42.137158] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:43.137586] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:43.145278] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:44.145646] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:44.154426] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:45.154758] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:45.162935] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:13:46.163514] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:46.171013] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:47.171286] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:47.179235] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:48.179576] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:48.189020] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:49.189391] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:49.197474] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:50.197872] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:50.205416] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:51.205716] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:51.213764] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:52.214088] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:52.221642] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:53.221926] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:53.229652] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:54.229970] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:54.238151] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:55.238615] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:55.247151] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:56.247705] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:56.256154] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:57.256635] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:57.264678] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:58.264962] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:58.274242] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:59.274586] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:59.282128] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:00.282541] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:00.290188] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:01.290509] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:01.297879] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:02.298220] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:02.305989] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:03.306355] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:03.314106] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:04.314369] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:04.321834] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:05.322159] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:05.329774] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:06.330093] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:06.338052] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:07.338377] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:07.349639] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:08.349927] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:08.357512] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:14:09.357855] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:09.365594] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:10.365913] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:10.373451] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:11.373779] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:11.381960] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:12.382332] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:12.390406] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:13.390731] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:13.398055] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
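For readers trying to follow the loop: test_llm_inference_service.py:632 appears to poll get_llmisvc once per second and diff the condition types whose status is 'True' against an expected set. A minimal sketch of such a wait loop, assuming get_llmisvc returns the LLMInferenceService custom resource as a dict (the helper name, argument order, and expected set are inferred from the log; the real test code may differ):

    import time

    EXPECTED = {'Ready', 'WorkloadsReady', 'RouterReady'}

    def wait_for_llmisvc_ready(get_llmisvc, name, namespace,
                               version='v1alpha1', timeout_s=600, interval_s=1.0):
        # get_llmisvc is assumed to return the CR as a dict, matching the
        # repr printed in the log above.
        missing = set(EXPECTED)
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            obj = get_llmisvc(name, namespace, version)
            conditions = obj.get('status', {}).get('conditions', [])
            true_types = {c['type'] for c in conditions
                          if c.get('status') == 'True'}
            missing = EXPECTED - true_types
            if not missing:
                return obj
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {EXPECTED}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f'conditions never became True: {missing}')

With the status above, missing stays {'Ready', 'WorkloadsReady'} on every pass, so a loop like this spins until its timeout and the step is then marked failed.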
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:18.430982] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:18.438822] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:19.439254] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:19.446692] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:20.446993] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:20.455057] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:21.455372] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:21.464451] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:22.464705] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:22.472261] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:23.472731] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:23.479697] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:24.479980] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:24.487559] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:25.487878] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:25.495520] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:26.495809] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:26.503570] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:27.503854] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:27.512232] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:28.512792] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:28.520498] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:29.520902] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:29.528759] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:30.529073] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:30.537186] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:31.537470] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:31.545358] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:14:32.545659] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:32.553238] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:33.553572] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:33.561330] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:34.561610] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:34.570170] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:35.570499] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:35.578004] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:36.578347] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:36.586493] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:37.586800] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:37.594606] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:38.594907] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:38.602636] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:39.602932] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:39.610198] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:40.610531] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:40.617787] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
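The stall above is a condition-polling loop: each get_llmisvc call returns in ~0.008s, the test compares the set of True-status condition types against the expected set, logs the gap, and retries a second later. A minimal sketch of that pattern, assuming a hypothetical fetch_llmisvc helper, timeout, and interval (this mirrors the log output but is not the actual test code):

    # Sketch of the condition-polling pattern seen in the log above.
    # fetch_llmisvc stands in for the test's get_llmisvc helper; its
    # signature, the timeout, and the 1s interval are assumptions.
    import time

    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

    def wait_for_llmisvc_ready(fetch_llmisvc, name, namespace,
                               api_version="v1alpha1",
                               timeout_s=900, interval_s=1.0):
        deadline = time.monotonic() + timeout_s
        while True:
            resource = fetch_llmisvc(name, namespace, api_version)
            conditions = resource.get("status", {}).get("conditions", [])
            # Collect the condition types currently reporting status "True".
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return conditions  # Ready, WorkloadsReady, RouterReady all True
            if time.monotonic() >= deadline:
                raise TimeoutError(f"still missing true conditions: {missing}")
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {EXPECTED}, got {conditions}")
            time.sleep(interval_s)

In this run the loop never exits: missing stays {'Ready', 'WorkloadsReady'} on every iteration, so the test eventually times out and the step fails.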
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:53.734581] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:53.742076] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:54.742485] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:54.750736] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
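What these records show: logging.py:34/43 wrap each get_llmisvc fetch with start/end entries, and the check at test_llm_inference_service.py:632 re-fetches the LLMInferenceService 'stop-feature-test' in namespace 'kserve-ci-e2e-test' once per second, comparing the condition types whose status is 'True' against the expected set {'Ready', 'WorkloadsReady', 'RouterReady'}. A minimal sketch of that wait loop, with hypothetical helper names (true_conditions, wait_for_conditions) rather than the test's actual code:

import time

EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

def true_conditions(llmisvc: dict) -> set:
    # Condition types currently reporting status == "True".
    return {
        c["type"]
        for c in llmisvc.get("status", {}).get("conditions", [])
        if c.get("status") == "True"
    }

def wait_for_conditions(get_llmisvc, name, namespace, version, timeout_s=600):
    # Poll once per second until every expected condition is True,
    # mirroring the 'Waiting: Missing true conditions: ...' records above.
    deadline = time.monotonic() + timeout_s
    missing = EXPECTED
    while time.monotonic() < deadline:
        obj = get_llmisvc(name, namespace, version)
        missing = EXPECTED - true_conditions(obj)
        if not missing:
            return obj
        print(f"Waiting: Missing true conditions: {missing}, expected {EXPECTED}")
        time.sleep(1)
    raise TimeoutError(f"conditions still not True after {timeout_s}s: {missing}")

In this run the missing set never shrinks: 'RouterReady' flipped True at 20:06:16, but 'Ready' and 'WorkloadsReady' have been False since 20:05:47, so a loop like this can only run out its timeout and fail the test.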
[... the same three-record cycle (get_llmisvc start, end - ✅ in ~0.007-0.011s, and an identical 'Waiting: Missing true conditions' report) repeats once per second from 2026-04-24T20:14:50 through 2026-04-24T20:15:29; the condition list never changes ...]
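Every cycle in that stretch reports the same blocker. The message 'Deployment does not have minimum availability.' with reason 'MinimumReplicasUnavailable' is the stock Available=False condition set by the Kubernetes Deployment controller when too few pods are Ready, and the LLMInferenceService status appears to propagate it into 'MainWorkloadReady' and from there into 'WorkloadsReady' and 'Ready'. A sketch of how one might read that condition straight off the Deployments while the test spins, using the kubernetes Python client (the namespace comes from the log; listing all Deployments is an assumption, since the workload Deployment's exact name isn't shown here):

from kubernetes import client, config

config.load_kube_config()  # inside the cluster: config.load_incluster_config()
apps = client.AppsV1Api()

# Print each Deployment's Available condition and replica counts.
for dep in apps.list_namespaced_deployment("kserve-ci-e2e-test").items:
    for cond in dep.status.conditions or []:
        if cond.type == "Available":
            print(
                f"{dep.metadata.name}: Available={cond.status} "
                f"reason={cond.reason} message={cond.message!r} "
                f"ready={dep.status.ready_replicas or 0}/{dep.spec.replicas}"
            )

An Available=False Deployment here usually means the pods themselves are stuck (image pull, scheduling, or failing probes), so the pod events, not the LLMInferenceService conditions, are the next place to look.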
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:28.018156] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:28.026897] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:29.027282] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:29.034436] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:30.034716] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:30.042147] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:31.042481] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:31.049798] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:32.050133] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:32.058576] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:33.058886] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:33.066578] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:34.066875] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:34.075175] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:35.075488] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:35.082739] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:36.083017] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:36.090777] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:37.091040] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:37.098166] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:38.098483] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:38.105939] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:39.106223] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:39.113930] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:40.114351] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:40.122794] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:41.123111] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:41.130645] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:15:42.130920] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:42.138869] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:43.139124] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:43.146294] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:44.146584] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:44.156473] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:45.156818] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:45.165246] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:46.165557] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:46.172936] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:47.173278] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:47.181932] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:48.182247] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:48.190006] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:49.190289] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:49.201353] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:50.201746] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:50.209699] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
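
The message at test_llm_inference_service.py:632 comes from a plain condition poll: fetch the LLMInferenceService, collect the condition types whose status is 'True', and keep waiting while any expected type is still missing. Below is a minimal sketch of such a loop, assuming the official kubernetes Python client and a serving.kserve.io/v1alpha1 LLMInferenceService custom resource; the function name and the group/plural strings are illustrative assumptions, not the repo's actual helpers.

    import time

    from kubernetes import client, config


    def wait_for_llmisvc_conditions(name, namespace, expected, timeout=600.0):
        """Poll an LLMInferenceService until every condition type in
        `expected` has status "True"; raise TimeoutError on timeout."""
        config.load_kube_config()  # use load_incluster_config() inside a pod
        api = client.CustomObjectsApi()
        deadline = time.monotonic() + timeout
        missing = set(expected)
        while time.monotonic() < deadline:
            obj = api.get_namespaced_custom_object(
                group="serving.kserve.io",      # assumed CRD group
                version="v1alpha1",
                namespace=namespace,
                plural="llminferenceservices",  # assumed CRD plural
                name=name,
            )
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return conditions
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {set(expected)}, got {conditions}")
            time.sleep(1)
        raise TimeoutError(f"conditions never became True: {missing}")

    # e.g.:
    # wait_for_llmisvc_conditions("stop-feature-test", "kserve-ci-e2e-test",
    #                             {"Ready", "WorkloadsReady", "RouterReady"})

The '✅ in 0.007s' lines around each iteration time the get call itself; the wall-clock time in this excerpt is spent in the one-second sleeps, and the condition set never changes.
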
[e2e-llm-inference-service] (the poll/wait cycle continues once per second with the identical condition dump from 20:15:51Z through 20:16:06Z; Ready, WorkloadsReady, and MainWorkloadReady stay False with reason=MinimumReplicasUnavailable the whole time)
[2026-04-24T20:16:05.329721] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:05.337764] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:06.338078] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:06.346501] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:07.346901] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:07.356098] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:08.356411] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:08.364333] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:09.364625] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:09.372523] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:10.372865] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:10.381165] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:11.381469] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:11.389987] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
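What the loop is doing: each iteration calls get_llmisvc (the ~0.008s start/end pair), collects the condition types whose status is 'True', and diffs them against the expected set {'Ready', 'WorkloadsReady', 'RouterReady'}. A minimal sketch of that polling pattern, assuming a get_llmisvc helper that returns the resource as a dict — this is an illustration, not the actual test_llm_inference_service.py code:

import time

EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

def wait_for_llmisvc_ready(get_llmisvc, name, namespace, version, timeout_s=900):
    """Poll until every expected condition type reports status 'True'.

    get_llmisvc(name, namespace, version) is assumed to return the
    LLMInferenceService as a dict with Kubernetes-style status.conditions.
    """
    missing = set(EXPECTED)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resource = get_llmisvc(name, namespace, version)
        conditions = resource.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return conditions  # all expected conditions are True
        print(f"Waiting: Missing true conditions: {missing}, expected {EXPECTED}, got {conditions}")
        time.sleep(1)  # matches the one-second cadence seen in the log
    raise TimeoutError(f"still missing true conditions after {timeout_s}s: {missing}")

Given the conditions above, a loop like this can never exit early in this run: RouterReady became True at 20:06:16, but Ready and WorkloadsReady stay pinned False by the unavailable Deployment, which is consistent with the step running to its timeout and the task being marked failed.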
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:41.646720] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:41.654574] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:42.654890] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:42.662955] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:43.663242] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:43.671358] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:44.671652] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:44.679564] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:45.679903] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:45.689436] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:46.689739] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:46.698982] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:47.699381] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:47.707443] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:48.707768] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:48.717547] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
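The wait loop emitting these records (test_llm_inference_service.py:632) is not itself shown in the log. The sketch below is a minimal reconstruction of the polling pattern it reflects, assuming the standard kubernetes Python client and the usual KServe CRD coordinates (group serving.kserve.io, version v1alpha1, plural llminferenceservices); the function name, signature, and timeout are illustrative, not the actual e2e helpers.

    # Minimal sketch of the condition-wait pattern seen in this log.
    # Assumptions: kubernetes Python client; CRD serving.kserve.io/v1alpha1,
    # plural "llminferenceservices". Not the actual kserve e2e helper.
    import time
    from kubernetes import client, config

    def wait_for_true_conditions(name, namespace, expected, timeout=900, interval=1):
        config.load_kube_config()
        api = client.CustomObjectsApi()
        deadline = time.time() + timeout
        missing = set(expected)
        while time.time() < deadline:
            obj = api.get_namespaced_custom_object(
                "serving.kserve.io", "v1alpha1", namespace,
                "llminferenceservices", name)
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return conditions  # every expected condition is True
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {set(expected)}, got {conditions}")
            time.sleep(interval)
        raise TimeoutError(f"conditions still not True after {timeout}s: {missing}")

    # Usage matching this run:
    # wait_for_true_conditions("stop-feature-test", "kserve-ci-e2e-test",
    #                          {"Ready", "WorkloadsReady", "RouterReady"})

In this run the loop never exits: RouterReady flipped True at 20:06:16, shrinking the missing set from three to two, but Ready and WorkloadsReady track the main workload and stay False.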
[e2e-llm-inference-service] [... identical poll cycles repeated once per second from 2026-04-24T20:16:36 through 20:17:17, each get_llmisvc call returning in 0.007-0.012s, conditions unchanged ...]
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:16.973008] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:17.973321] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:17.981451] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:18.981796] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:18.990067] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:19.990396] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:20.005341] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:21.005761] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:21.013906] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:22.014183] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:22.021988] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:23.022345] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:23.039794] end - ✅ in 0.017s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:24.040154] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:24.048111] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:25.048409] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:25.056907] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:26.057291] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:26.064995] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:27.065328] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:27.073794] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:28.074102] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:28.081840] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:29.082188] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:29.089793] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:30.090137] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:30.097882] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:31.098192] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:31.105833] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:32.106143] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:32.114472] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:33.114846] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:33.123696] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:34.123996] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:34.133034] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:35.133341] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:35.141255] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:36.141637] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:36.149733] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:37.150054] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:37.158936] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:17:38.159260] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:38.167038] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:39.167382] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:39.175513] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:40.175810] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:40.183107] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:41.183435] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:41.191176] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:42.191594] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:42.201078] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:43.201359] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:43.209536] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:44.209855] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:44.218270] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:45.218702] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:45.230643] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:46.230929] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:46.238607] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:47.238913] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:47.250975] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:48.251287] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:48.258951] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:49.259351] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:49.267483] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:50.267921] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:50.275872] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]

[... the same get_llmisvc start/end pair and the identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}" message with the unchanged eight-condition list repeat once per second from 2026-04-24T20:17:49 through 2026-04-24T20:18:29; only the poll timestamps and the 0.007–0.066 s get_llmisvc call durations vary ...]

[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:30.736242] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:30.744372] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready',
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:31.744673] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:31.827466] end - ✅ in 0.083s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:32.827972] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:32.928022] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:33.928354] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:33.936803] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:34.937182] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:34.945421] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:35.945736] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:36.028141] end - ✅ in 0.082s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:37.028662] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:37.037204] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:38.037495] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:38.046285] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:39.046614] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:39.055109] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:40.055714] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:40.063819] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:41.064130] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:41.072839] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:42.073255] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:42.085032] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:43.085549] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:43.093682] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:44.093968] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:44.102742] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:45.102995] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:45.110997] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:46.111338] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:46.119422] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:47.119836] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:47.127851] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:18:48.128117] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:48.140049] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:49.140353] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:49.148871] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:50.149361] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:50.157822] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:51.158235] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:51.166954] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:52.167352] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:52.176166] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:53.176500] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:53.184956] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:54.185249] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:54.194497] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:55.194753] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:55.202519] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:56.202769] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:56.210634] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:57.210903] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:57.218436] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:58.218744] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:58.226974] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}]

[... one-per-second get_llmisvc polls from 2026-04-24T20:18:59 through 2026-04-24T20:19:41 elided: each call returned in 0.007-0.021s and was followed by an identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}" dump of the same eight conditions. Throughout that window HTTPRoutesReady, InferencePoolReady, PresetsCombined, RouterReady and SchedulerWorkloadReady were True, while MainWorkloadReady, Ready and WorkloadsReady remained False with reason MinimumReplicasUnavailable ('Deployment does not have minimum availability.'), unchanged since 20:05:47Z. ...]
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:41.613385] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:41.621597] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:42.621954] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:42.629757] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:43.630042] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:43.638094] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:44.638548] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:44.647399] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 
'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:45.647729] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:45.655494] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:46.655820] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:46.663074] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:47.663371] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:47.671091] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:48.671383] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:48.679263] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:49.679560] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:49.687359] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:50.687652] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:50.695068] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:51.695370] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:51.702962] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:52.703244] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:52.711460] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:53.711746] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:53.719579] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:54.719852] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:54.728596] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:55.729002] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:55.736726] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:56.737073] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:56.745964] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:19:57.746259] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:57.753955] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:58.754252] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:58.761561] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:59.761841] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:59.770421] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:00.770796] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:00.778758] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:01.779180] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:01.786501] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:02.786758] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:02.794223] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:03.794521] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:03.802891] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
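For context, the repeating block above is a poll-until-ready loop: the test fetches the LLMInferenceService about once per second and re-checks its status conditions until 'Ready', 'WorkloadsReady', and 'RouterReady' are all True or an overall timeout expires. A minimal sketch of that pattern, assuming a get_llmisvc callable that returns the custom resource as a dict (the helper name, signature, and timeout below are illustrative, not the test suite's actual values):

    import time

    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

    def wait_for_llmisvc_ready(get_llmisvc, name, namespace, timeout_s=900, interval_s=1.0):
        # Poll the LLMInferenceService until every expected condition reports
        # status "True", mirroring the Waiting messages in the log above.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            conditions = get_llmisvc(name, namespace).get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return
            print(f"Waiting: Missing true conditions: {missing}, expected {EXPECTED}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f"{namespace}/{name} never reached {EXPECTED}")

Because the loop only re-reads the status, it can never make progress once the underlying Deployment is stuck; it simply burns the remaining timeout, which is exactly what this run shows.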
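Note that every failing condition here has the same root cause: the main workload Deployment has reported MinimumReplicasUnavailable ("Deployment does not have minimum availability.") since 20:05:47Z, so MainWorkloadReady, WorkloadsReady, and therefore Ready can never flip to True no matter how long the loop polls. A generic way to confirm which pods are holding a Deployment below its minimum availability, sketched with the official Kubernetes Python client (a hand-rolled diagnostic, not part of the test suite; the namespace is taken from the get_llmisvc arguments in the log):

    from kubernetes import client, config

    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    apps = client.AppsV1Api()
    core = client.CoreV1Api()

    namespace = "kserve-ci-e2e-test"

    # Report deployments that are below their desired replica count, then
    # inspect the pods behind each one for the real failure (ImagePullBackOff,
    # CrashLoopBackOff, unschedulable, failing readiness probe, ...).
    for dep in apps.list_namespaced_deployment(namespace).items:
        desired = dep.spec.replicas or 0
        available = dep.status.available_replicas or 0
        if available < desired:
            print(f"{dep.metadata.name}: {available}/{desired} available")
            selector = ",".join(f"{k}={v}" for k, v in dep.spec.selector.match_labels.items())
            for pod in core.list_namespaced_pod(namespace, label_selector=selector).items:
                print(f"  {pod.metadata.name}: {pod.status.phase}")
                for cs in pod.status.container_statuses or []:
                    if cs.state.waiting:
                        print(f"    {cs.name}: {cs.state.waiting.reason} - {cs.state.waiting.message}")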
{'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:14.885880] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:14.893422] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:15.893881] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:15.901856] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 
'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:16.902245] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:16.909784] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:17.910249] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:17.918791] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 
'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:18.919210] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:18.926436] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:19.926679] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:19.934454] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:20:20.934728] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:20.942356] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:21.942701] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:21.950482] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:22.950728] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:22.958539] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:23.958816] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:23.967108] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:24.967609] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:24.975542] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:25.975886] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:25.984098] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:26.984459] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:26.992260] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:27.992790] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:28.000736] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:29.001241] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:29.008944] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, 
{'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:30.009489] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:30.020422] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:31.020714] start - args=(, 'stop-feature-test', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:31.028326] end - ✅ in 0.007s [e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T20:20:31.028427] end - ❌ 900.643s: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': 
'2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_stop_feature] [2026-04-24T20:20:31.028578] end - ❌ 900.700s: Missing true conditions: {'Ready', 'WorkloadsReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:05:47Z', 'severity': 'Info', 'status': 'True', 'type': 'HTTPRoutesReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'InferencePoolReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:38Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'status': 'True', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:16Z', 'severity': 'Info', 'status': 'True', 'type': 'SchedulerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:05:47Z', 'message': 'Deployment does not have minimum availability.', 'reason': 'MinimumReplicasUnavailable', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] _ test_llm_autoscaling_keda_lws[router-managed-workload-llmd-simulator-lws-scaling-keda] _ [e2e-llm-inference-service] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-llm-inference-service] [e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-lws', 'scaling-keda'], prompt='KServe is a', service_na... 
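The stop-feature test fails by timeout rather than a hard error: RouterReady and SchedulerWorkloadReady flip to True, but the main workload's Deployment never gains minimum availability (reason MinimumReplicasUnavailable), so Ready and WorkloadsReady stay False for the full 900s wait. A minimal sketch for pulling the same status conditions outside the harness with the official Kubernetes Python client; it assumes kubeconfig access to the ephemeral test cluster, and the resource plural "llminferenceservices" is an assumption not confirmed by this log:

    from kubernetes import client, config

    # Load credentials for the test cluster (assumes a local kubeconfig).
    config.load_kube_config()
    api = client.CustomObjectsApi()

    # Resource coordinates taken from the log; the plural is assumed.
    obj = api.get_namespaced_custom_object(
        group="serving.kserve.io",
        version="v1alpha1",
        namespace="kserve-ci-e2e-test",
        plural="llminferenceservices",  # assumption: not confirmed by the log
        name="stop-feature-test",
    )
    for cond in obj.get("status", {}).get("conditions", []):
        print(cond.get("type"), cond.get("status"), cond.get("reason", ""), cond.get("message", ""))

From there, a kubectl describe on the backing Deployment and its pods would show why replicas are unavailable (scheduling, image pulls, or failing probes).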
[e2e-llm-inference-service] _ test_llm_autoscaling_keda_lws[router-managed-workload-llmd-simulator-lws-scaling-keda] _
[e2e-llm-inference-service] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-llm-inference-service]
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-lws', 'scaling-keda'], prompt='KServe is a', service_na...
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-lws-1337f511'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @pytest.mark.llminferenceservice
[e2e-llm-inference-service]     @pytest.mark.autoscaling
[e2e-llm-inference-service]     @pytest.mark.autoscaling_keda
[e2e-llm-inference-service]     @pytest.mark.parametrize(
[e2e-llm-inference-service]         "test_case",
[e2e-llm-inference-service]         [
[e2e-llm-inference-service]             pytest.param(
[e2e-llm-inference-service]                 TestCase(
[e2e-llm-inference-service]                     base_refs=[
[e2e-llm-inference-service]                         "router-managed",
[e2e-llm-inference-service]                         "workload-llmd-simulator-lws",
[e2e-llm-inference-service]                         "scaling-keda",
[e2e-llm-inference-service]                     ],
[e2e-llm-inference-service]                     prompt="KServe is a",
[e2e-llm-inference-service]                     service_name="autoscale-keda-lws",
[e2e-llm-inference-service]                 ),
[e2e-llm-inference-service]                 marks=[
[e2e-llm-inference-service]                     pytest.mark.cluster_cpu,
[e2e-llm-inference-service]                     pytest.mark.cluster_multi_node,
[e2e-llm-inference-service]                     pytest.mark.llmd_simulator,
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]         ],
[e2e-llm-inference-service]         indirect=["test_case"],
[e2e-llm-inference-service]         ids=generate_test_id,
[e2e-llm-inference-service]     )
[e2e-llm-inference-service]     @log_execution
[e2e-llm-inference-service]     def test_llm_autoscaling_keda_lws(test_case: TestCase):
[e2e-llm-inference-service]         """KEDA + LWS: VA and ScaledObject exist; pods scale under load."""
[e2e-llm-inference-service]         inject_k8s_proxy()
[e2e-llm-inference-service]         kserve_client = _new_kserve_client()
[e2e-llm-inference-service]         service_name = test_case.llm_service.metadata.name
[e2e-llm-inference-service]
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           _create_and_wait(kserve_client, test_case)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:525:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client =
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-lws', 'scaling-keda'], prompt='KServe is a', service_na...
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-lws-1337f511'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def _create_and_wait(kserve_client, test_case):
[e2e-llm-inference-service]         """Create LLMISVC and wait for it to be ready."""
[e2e-llm-inference-service]         create_llmisvc(kserve_client, test_case.llm_service)
[e2e-llm-inference-service] >       wait_for_llm_isvc_ready(
[e2e-llm-inference-service]             kserve_client, test_case.llm_service, test_case.wait_timeout
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:295:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] args = (, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kin...s-aut-1696d0b7'},
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-lws-1337f511'}]},
[e2e-llm-inference-service] 'status': None}, 900)
[e2e-llm-inference-service] kwargs = {}, func_name = 'wait_for_llm_isvc_ready'
[e2e-llm-inference-service] timestamp_start = '2026-04-24T20:05:45.327519', start_time = 1777061145.3281455
[e2e-llm-inference-service] duration = 900.0011365413666, timestamp_end = '2026-04-24T20:20:45.329282'
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @functools.wraps(func)
[e2e-llm-inference-service]     def wrapper(*args, **kwargs):
[e2e-llm-inference-service]         func_name = func.__name__
[e2e-llm-inference-service]
[e2e-llm-inference-service]         timestamp_start = datetime.now().isoformat()
[e2e-llm-inference-service]         logger.info(
[e2e-llm-inference-service]             f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]         start_time = time.time()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           result = func(*args, **kwargs)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/logging.py:40:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client =
[e2e-llm-inference-service] given = {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': None,
[e2e-llm-inference-service] ...tor-lws-aut-1696d0b7'},
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-lws-1337f511'}]},
[e2e-llm-inference-service] 'status': None}
[e2e-llm-inference-service] timeout_seconds = 900
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @log_execution
[e2e-llm-inference-service]     def wait_for_llm_isvc_ready(
[e2e-llm-inference-service]         kserve_client: KServeClient,
[e2e-llm-inference-service]         given: V1alpha1LLMInferenceService,
[e2e-llm-inference-service]         timeout_seconds: int = 900,
[e2e-llm-inference-service]     ) -> str:
[e2e-llm-inference-service]         def assert_llm_isvc_ready():
[e2e-llm-inference-service]             out = get_llmisvc(
[e2e-llm-inference-service]                 kserve_client,
[e2e-llm-inference-service]                 given.metadata.name,
[e2e-llm-inference-service]                 given.metadata.namespace,
[e2e-llm-inference-service]                 given.api_version.split("/")[1],
[e2e-llm-inference-service]             )
[e2e-llm-inference-service]
[e2e-llm-inference-service]             if "status" not in out:
[e2e-llm-inference-service]                 raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]             status = out["status"]
[e2e-llm-inference-service]             if "conditions" not in status:
[e2e-llm-inference-service]                 raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]             expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]             got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]             conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]             for condition in conditions:
[e2e-llm-inference-service]                 if condition.get("status") == "True":
[e2e-llm-inference-service]                     got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]             missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]             if missing_conditions:
[e2e-llm-inference-service]                 raise AssertionError(
[e2e-llm-inference-service]                     f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]                 )
[e2e-llm-inference-service]             return True
[e2e-llm-inference-service]
[e2e-llm-inference-service] >       return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:618:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] assertion_fn = .assert_llm_isvc_ready at 0x7f7cef6aafc0>
[e2e-llm-inference-service] timeout = 900, interval = 1.0
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def wait_for(
[e2e-llm-inference-service]         assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
[e2e-llm-inference-service]     ) -> Any:
[e2e-llm-inference-service]         """Wait for the assertion to succeed within timeout."""
[e2e-llm-inference-service]         deadline = time.time() + timeout
[e2e-llm-inference-service]         while True:
[e2e-llm-inference-service]             try:
[e2e-llm-inference-service] >               return assertion_fn()
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:628:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def assert_llm_isvc_ready():
[e2e-llm-inference-service]         [... the frame repeats the body of assert_llm_isvc_ready shown above, down to the failing raise ...]
[e2e-llm-inference-service]             if missing_conditions:
[e2e-llm-inference-service] >               raise AssertionError(
[e2e-llm-inference-service]                     f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]                 )
[e2e-llm-inference-service] E           AssertionError: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:613: AssertionError
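This second failure is structural rather than a slow rollout: the controller reports ScalingCRDNotFound because no VariantAutoscaling kind is registered under llmd.ai/v1alpha1 on the cluster, so WorkloadsReady (and therefore Ready) can never become True no matter how long the wait. A quick check for whether the CRD is installed, sketched with the Kubernetes Python client; the full CRD name "variantautoscalings.llmd.ai" is inferred from the kind and group in the error message and is an assumption:

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    ext = client.ApiextensionsV1Api()
    try:
        # Name inferred from kind "VariantAutoscaling" + group "llmd.ai"; assumed.
        crd = ext.read_custom_resource_definition("variantautoscalings.llmd.ai")
        print("CRD installed; served versions:", [v.name for v in crd.spec.versions])
    except ApiException as exc:
        if exc.status == 404:
            print("CRD missing, consistent with the ScalingCRDNotFound condition")
        else:
            raise

If the CRD is absent, the fix belongs in cluster setup (installing the llm-d autoscaling CRDs before this test runs), not in the test itself.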
[e2e-llm-inference-service] ------------------------------ Captured log setup ------------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-autoscale-keda-l-78828c4a in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-autoscale-keda-l-78828c4a
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-autoscale-keda-l-78828c4a
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-llmd-simulator-lws-aut-1696d0b7 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-llmd-simulator-lws-aut-1696d0b7
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-llmd-simulator-lws-aut-1696d0b7
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scaling-keda-autoscale-keda-lws-1337f511 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scaling-keda-autoscale-keda-lws-1337f511
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scaling-keda-autoscale-keda-lws-1337f511
[e2e-llm-inference-service] ------------------------------ Captured log call -------------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [test_llm_autoscaling_keda_lws] [2026-04-24T20:05:45.227843] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'workload-llmd-simulator-lws', 'scaling-keda'], prompt='KServe is a', service_name='autoscale-keda-lws', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': None,
[e2e-llm-inference-service] 'creation_timestamp': None,
[e2e-llm-inference-service] 'deletion_grace_period_seconds': None,
[e2e-llm-inference-service] 'deletion_timestamp': None,
[e2e-llm-inference-service] 'finalizers': None,
[e2e-llm-inference-service] 'generate_name': None,
[e2e-llm-inference-service] 'generation': None,
[e2e-llm-inference-service] 'labels': None,
[e2e-llm-inference-service] 'managed_fields': None,
[e2e-llm-inference-service] 'name': 'autoscale-keda-lws',
[e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service] 'owner_references': None,
[e2e-llm-inference-service] 'resource_version': None,
[e2e-llm-inference-service] 'self_link': None,
[e2e-llm-inference-service] 'uid': None},
[e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-keda-l-78828c4a'},
[e2e-llm-inference-service] {'name': 'workload-llmd-simulator-lws-aut-1696d0b7'},
[e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-lws-1337f511'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T20:05:45.240050] start - args=(, [... same LLMInferenceService manifest as above ...]), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T20:05:45.327259] end - ✅ in 0.087s
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T20:05:45.327519] start - args=(, [... same LLMInferenceService manifest as above ...], 900), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:05:45.328231] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:05:45.334494] end - ✅ in 0.006s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
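The one-second cadence of the entries below comes from the wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0) call shown in the traceback above. The captured frame cuts off before the retry branch; a helper consistent with the logged behavior would look roughly like the following sketch, where the except/deadline/sleep handling is an assumption reconstructed from the repeating "Waiting: ..." lines:

    import time
    from typing import Any, Callable

    def wait_for(assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1) -> Any:
        """Poll assertion_fn until it succeeds or the deadline passes."""
        deadline = time.time() + timeout
        while True:
            try:
                return assertion_fn()
            except AssertionError as exc:
                # Assumed branch: the captured traceback ends before this point.
                if time.time() >= deadline:
                    raise  # after the timeout, the last assertion error surfaces as the test failure
                print(f"Waiting: {exc}")  # the harness logs this at test_llm_inference_service.py:632
                time.sleep(interval)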
[... the get_llmisvc poll and the "Waiting: No conditions found in status" entry repeat once per second, identical except for timestamps, from 20:05:46 through 20:06:00 ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:01.533956] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:01.541726] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the poll and the identical "Waiting: Missing true conditions" entry repeat at 20:06:02, 20:06:03, and 20:06:04 ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:05.567151] start - args=(,
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:05.575262] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:06.575627] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:06.583207] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:07.583477] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:07.627439] end - ✅ in 0.044s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:08.627730] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:08.634921] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:09.635209] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:09.642816] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:10.643093] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:10.650488] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:11.650770] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:11.659515] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:12.659806] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:12.727672] end - ✅ in 0.068s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:13.727980] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:13.734995] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:14.735286] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:14.827740] end - ✅ in 0.092s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:15.828137] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:15.835190] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:16.835547] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:16.843374] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:17.843674] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:17.851228] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:18.851544] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:18.858543] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:19.858841] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:19.866379] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:20.866692] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:20.874068] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:21.874541] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:21.882244] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:22.882557] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:22.927213] end - ✅ in 0.044s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:23.927510] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:23.934886] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:24.935142] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:24.942524] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:25.942803] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:25.950676] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:26.950957] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:26.958671] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:27.958942] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:27.966239] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:28.966679] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:28.973811] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:29.974224] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:29.981520] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:30.981840] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:30.989464] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:31.989827] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:31.997856] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:32.998182] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:33.005732] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:34.006137] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:34.013188] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:35.013494] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:35.020984] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:36.021397] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:36.030352] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:37.030841] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:37.038441] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:38.038745] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:38.046874] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:39.047258] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:39.054922] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:40.055372] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:40.062807] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:41.063113] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:41.070897] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:42.071238] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:42.078350] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:43.078739] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:43.086216] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:44.086733] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:44.095188] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:45.095510] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:45.103020] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:06:46.103352] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:06:46.113920] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
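The "Waiting" lines come from a one-second condition-polling loop in the e2e suite. A minimal sketch of that pattern, assuming a get_llmisvc helper that returns the resource as a dict shaped like the entries above (the helper name matches the log, but the timeout, interval, and signature here are assumptions, not the suite's actual code):

    import time

    EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

    def wait_for_conditions(get_llmisvc, name, namespace, version="v1alpha1",
                            timeout_s=600, interval_s=1.0):
        # Poll until every expected condition type reports status "True".
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            resource = get_llmisvc(name, namespace, version)
            conditions = resource.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return resource
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {EXPECTED}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f"{namespace}/{name} never reached {EXPECTED}")

With the conditions shown above, missing stays {'Ready', 'WorkloadsReady', 'RouterReady'} on every iteration, so a loop like this can only exit via its timeout.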
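Every False condition carries reason ScalingCRDNotFound: the controller cannot find any kind "VariantAutoscaling" registered for llmd.ai/v1alpha1, i.e. the VariantAutoscaling CRD is not installed on the test cluster, so the reconcile can never succeed. A quick client-side check, sketched with the official kubernetes Python client (the plural CRD name variantautoscalings.llmd.ai is an assumption following Kubernetes <plural>.<group> naming, not confirmed by the log):

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()  # or config.load_incluster_config() on-cluster

    CRD_NAME = "variantautoscalings.llmd.ai"  # <plural>.<group>; plural assumed

    api = client.ApiextensionsV1Api()
    try:
        crd = api.read_custom_resource_definition(CRD_NAME)
        print(f"{CRD_NAME} installed; served versions:",
              [v.name for v in crd.spec.versions if v.served])
    except ApiException as exc:
        if exc.status == 404:
            # Matches the controller error: no matches for kind
            # "VariantAutoscaling" in version "llmd.ai/v1alpha1".
            print(f"{CRD_NAME} missing; install it before running this test")
        else:
            raise

Until that CRD is applied as part of the deploy step (or the KEDA+LWS autoscaling case is skipped when it is absent), the resource stays in this state and the wait below can only time out.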
[e2e-llm-inference-service] (identical one-second poll iterations omitted through 2026-04-24T20:07:03; the trace continues:) [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:04.251807] start - args=(,
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:04.259281] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:05.259631] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:05.266685] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:06.267142] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:06.274179] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:07.274509] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:07.281972] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:08.282349] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:08.289508] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:09.289799] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:09.296939] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:10.297259] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:10.304888] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:11.305267] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:11.312785] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:12.313079] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:12.320582] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:13.320914] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:13.331173] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:14.331524] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:14.339156] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:15.339503] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:15.346740] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:16.347049] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:16.354733] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:17.355023] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:17.362419] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:18.362711] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:18.370041] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:19.370401] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:19.378282] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:20.378594] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:20.386347] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:21.386630] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:21.394485] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:22.394777] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:22.403371] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:23.403712] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:23.411538] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:24.411849] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:24.419728] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:25.420102] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:25.427994] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:26.428272] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:26.435190] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:27.435561] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:27.442973] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:28.443287] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:28.450711] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:07:29.451060] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:07:29.458888] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
[e2e-llm-inference-service] (the get_llmisvc start/end pair and the "Waiting: Missing true conditions" status dump above repeat once per second from 2026-04-24T20:07:29 through 2026-04-24T20:08:01, 33 further iterations, all with an identical condition set)
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:02.715405] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:02.722573] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:03.722855] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:03.730173] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:04.730546] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:04.738716] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:05.738999] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:05.746732] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:06.747020] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:06.754613] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:07.754865] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:07.762635] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:08.762886] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:08.769998] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:09.770292] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:09.777376] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:10.777831] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:10.785055] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:11.786399] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:11.793217] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:12.793585] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:12.800710] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:13.801117] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:13.808623] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
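[Editor's note: the pinned ScalingCRDNotFound condition is the whole story here. 'no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"' is the standard Kubernetes error for a resource type whose CustomResourceDefinition is not installed, so the llmisvc controller can never reconcile the autoscale-keda-lws VariantAutoscaling and the test can never converge. A minimal diagnostic sketch to confirm this on the cluster, assuming the official kubernetes Python client; the exact CRD name is an assumption, hence the substring match:]

    # Diagnostic sketch (illustrative, not part of the test suite): list installed
    # CRDs and check whether any VariantAutoscaling definition exists. An empty
    # result means the ScalingCRDNotFound condition above is expected: whatever
    # component provides llmd.ai/v1alpha1 VariantAutoscaling was never installed.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() when running in-cluster
    crds = client.ApiextensionsV1Api().list_custom_resource_definition().items
    matches = [c.metadata.name for c in crds if "variantautoscaling" in c.metadata.name.lower()]
    print(matches or "no VariantAutoscaling CRD installed (llmd.ai/v1alpha1 unavailable)")

[The likely fix is therefore in test setup rather than in the controller: install the manifests that provide the llmd.ai/v1alpha1 VariantAutoscaling CRD before the autoscale-keda-lws case runs.]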
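[Editor's note: the repeating entries come from a 1 Hz condition-polling loop at test_llm_inference_service.py:632. A minimal sketch of that pattern follows; every name in it is an assumption for illustration (the real test uses its own get_llmisvc helper, and the API group/plural for LLMInferenceService may differ):]

    # Sketch of the 1 Hz wait loop the log reflects, under assumed names. It polls
    # the custom resource until all expected condition types report status "True",
    # or raises on timeout. On this run it can never exit: WorkloadsReady is pinned
    # False by the missing CRD, so iterations differ only in their timestamps.
    import time
    from kubernetes import client, config

    EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

    def wait_for_llmisvc_ready(name: str, namespace: str, timeout_s: int = 600) -> dict:
        config.load_kube_config()
        api = client.CustomObjectsApi()
        deadline = time.monotonic() + timeout_s
        true_types: set = set()
        while time.monotonic() < deadline:
            obj = api.get_namespaced_custom_object(
                group="serving.kserve.io", version="v1alpha1",  # assumed group/version
                namespace=namespace, plural="llminferenceservices", name=name)
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            if EXPECTED <= true_types:
                return obj  # all expected conditions are True
            print(f"Waiting: Missing true conditions: {EXPECTED - true_types}")
            time.sleep(1)  # matches the ~1 s cadence between log entries
        raise TimeoutError(f"missing true conditions: {EXPECTED - true_types}")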
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:31.947840] start - args=(,
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:31.954879] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:32.955197] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:32.963108] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:33.963484] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:33.970746] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:34.971118] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:34.978139] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:35.978465] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:35.986066] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:36.986359] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:36.994084] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:37.994472] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:38.002106] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:39.002663] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:39.010078] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:40.010352] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:40.018059] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:41.018356] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:41.025984] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:42.026283] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:42.033977] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:43.034377] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:43.041621] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:44.041905] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:44.049100] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:45.049380] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:45.056934] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:46.057279] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:46.064786] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:47.065072] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:47.073276] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:48.073597] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:48.081147] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:49.081553] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:49.088491] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:50.088787] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:50.096113] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:51.096544] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:51.105070] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:52.105479] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:52.113102] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:53.113514] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:53.120684] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:54.120958] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:54.128776] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:55.129088] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:55.136664] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:56.136978] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:56.144072] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:57.144390] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:57.151278] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:58.151732] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:58.159404] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:08:59.159688] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:08:59.167177] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:00.167472] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:00.180027] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:01.180355] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:01.188269] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:02.188612] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:02.197692] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:03.198046] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:03.206092] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:04.206548] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:04.214039] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:05.214488] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:05.222181] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:06.222631] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:06.229997] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:07.230352] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:07.238020] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:08.238276] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:08.245056] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:09.245337] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:09.252890] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:10.253179] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:10.261680] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:11.262000] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:11.269260] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:12.269622] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:12.276899] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:13.277186] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:13.284356] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:14.284734] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:14.292461] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:15.292725] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:15.299876] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:16.300152] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:16.307048] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:17.307375] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:17.314572] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:18.314857] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:18.321992] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:19.322269] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:19.329937] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:20.330247] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:20.337115] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:21.337588] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:21.345019] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:22.345407] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:22.352023] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:23.352318] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:23.359127] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:24.359348] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:24.366964] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:25.367253] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:25.375021] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:26.375317] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:26.382333] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:09:59.634744] start - args=(,
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:09:59.649687] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:00.649983] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:00.657424] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:01.657737] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:01.665099] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:02.665355] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:02.675058] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:03.675354] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:03.682747] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:04.683064] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:04.689871] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:05.690151] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:05.697407] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:06.697683] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:06.704894] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:07.705270] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:07.712557] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:08.712863] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:08.720544] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:09.720828] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:09.728002] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:10.728268] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:10.735882] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:11.736257] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:11.744631] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:12.744924] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:12.752183] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:13.752508] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:13.759993] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:14.760454] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:14.768495] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:15.768813] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:15.776227] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:16.776645] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:16.786370] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:17.786645] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:17.794325] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:18.794672] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:18.802197] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:19.802514] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:19.810160] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:20.810531] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:20.819890] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:21.820174] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:21.827800] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:22.828125] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:22.835967] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:23.836243] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:23.844065] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:24.844340] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:24.851675] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:25.851998] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:25.859462] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:26.859878] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:26.867251] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:27.867566] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:27.875556] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:28.875856] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:28.882937] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:29.883263] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:29.890035] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:30.891253] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:30.898992] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:31.899330] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:31.906676] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:32.906934] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:32.921402] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:33.921779] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:33.928782] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:34.929198] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:34.937031] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:35.937482] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:35.945397] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:36.945855] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:36.953351] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:37.953709] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:37.962078] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:38.962353] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:38.969984] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:39.970357] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:39.977814] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:40.978169] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:40.985456] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:41.985744] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:41.993225] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:42.993576] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:43.001257] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:44.001644] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:44.009119] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:45.009624] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:45.017289] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:46.017687] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:46.025605] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:47.025881] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:47.033583] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:48.033847] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:48.041229] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:49.041604] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:49.048803] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:50.049092] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:50.056149] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:51.056526] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:51.063903] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:52.064169] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:52.071699] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:53.072125] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:53.079987] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:54.080666] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:54.089109] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:55.089391] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:55.106044] end - ✅ in 0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:56.106441] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:56.114059] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:57.114398] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:57.122384] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:58.122669] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:58.129606] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:10:59.129898] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:10:59.137648] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:00.137950] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:00.145287] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:01.145749] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:01.153565] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:02.153938] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:02.162682] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:03.162986] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:03.170874] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:04.171184] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:04.178331] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:05.178609] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:05.186818] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:06.187062] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:06.194256] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:07.194609] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:07.202796] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:08.203205] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:08.210888] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:09.211137] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:09.218367] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:10.218654] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:10.226727] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:11.227159] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:11.234482] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:12.234787] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:12.242032] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:13.242328] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:13.250429] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:14.250744] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:14.258538] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:15.258792] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:15.266453] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:16.266722] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:16.274050] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:17.274544] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:17.281801] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:18.282124] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:18.289472] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:19.289838] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:19.297354] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:20.297644] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:20.306293] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:21.306628] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:21.313886] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:22.314155] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:22.322045] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:23.322378] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:23.330105] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:24.330454] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:24.339080] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:25.339472] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:25.348094] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:26.348520] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:26.357420] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:27.357731] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:27.366025] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:28.366384] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:28.375383] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:29.375674] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:29.383513] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:30.383810] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:30.391043] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:31.391336] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:31.398975] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:32.399348] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:32.406920] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:33.407198] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:33.415007] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:34.415293] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:34.423038] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:35.423384] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:35.431744] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:36.432079] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:36.440543] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:37.440843] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:37.448182] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:38.448464] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:38.457692] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
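Every iteration of this wait loop fails for the same root cause: the reconciler reports reason ScalingCRDNotFound because nothing on the cluster serves kind "VariantAutoscaling" in group/version llmd.ai/v1alpha1, so the main workload's VariantAutoscaling object can never be created and Ready/WorkloadsReady stay False no matter how long the test polls. A quick out-of-band check from a shell is sketched below; it is not part of this log, and the plural CRD name variantautoscalings.llmd.ai and the llminferenceservice resource name are assumptions derived from the kind and group in the error message, not confirmed against the cluster.

    # Diagnostic sketch (not from the CI log). Uses the kubeconfig the
    # pipeline wrote earlier in this task.
    export KUBECONFIG=/credentials/cluster-886sz-kubeconfig

    # List resources served by the llmd.ai API group; an empty result is
    # consistent with 'no matches for kind "VariantAutoscaling"'.
    kubectl api-resources --api-group=llmd.ai

    # Assumed plural CRD name, derived from kind=VariantAutoscaling, group=llmd.ai.
    kubectl get crd variantautoscalings.llmd.ai

    # Re-read the same conditions the test polls once per second
    # (resource name assumed from the LLMInferenceService kind).
    kubectl -n kserve-ci-e2e-test get llminferenceservice autoscale-keda-lws \
      -o jsonpath='{.status.conditions}'

If the CRD is indeed absent, installing the llm-d VariantAutoscaling CRD on the test cluster (or disabling the KEDA/LWS autoscale path for this profile) would be the expected fix; the test itself only polls status and cannot make progress until the CRD exists.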
[... identical poll iterations continue once per second from 2026-04-24T20:11:39.458007 through 2026-04-24T20:11:55.597217; the condition set shown above never changes ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:56.597530] start - args=(,
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:56.604670] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:57.604920] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:57.611933] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:58.612227] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:58.620420] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:11:59.620924] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:11:59.628556] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:00.628818] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:00.636260] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:01.636590] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:01.644845] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:02.645096] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:02.652655] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:03.653090] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:03.660696] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:04.661049] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:04.668892] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:05.669232] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:05.676718] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:06.677062] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:06.684271] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:07.684586] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:07.692118] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:08.692503] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:08.700242] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:09.700606] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:09.707924] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:10.708257] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:10.717450] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:11.717768] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:11.725428] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:12.725754] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:12.732973] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:13.733268] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:13.741022] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:14.741408] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:14.748704] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:15.748958] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:15.756389] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:16.756668] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:16.769514] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:17.769897] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:17.777029] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:18.777359] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:18.784759] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:19.785043] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:19.792631] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:20.792916] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:20.799949] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:21.800197] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:21.812064] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
[... the get_llmisvc start/end pair and the identical "Waiting: Missing true conditions" record shown above repeat once per second, 2026-04-24T20:12:21 through 2026-04-24T20:12:54, with the same six conditions every time ...]
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:54.083250] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:54.092383] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:55.092659] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:55.101894] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:56.102231] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:56.109735] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:57.110026] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:57.117083] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:58.117382] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:58.125237] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:12:59.125648] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:12:59.132883] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:00.133271] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:00.140888] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:01.141178] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:01.149134] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:02.149593] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:02.157043] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:03.157344] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:03.165040] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:04.165352] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:04.172681] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:05.172986] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:05.181135] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:06.181415] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:06.188412] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
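Every one of these iterations fails for the same underlying reason: the conditions carry reason ScalingCRDNotFound because no kind "VariantAutoscaling" is registered for "llmd.ai/v1alpha1" on the cluster, so the controller can never reconcile the autoscale-keda-lws workload's VariantAutoscaling object and the test can only spin until its timeout. A minimal pre-flight sketch for this failure mode, using the kubernetes Python client; the plural CRD name variantautoscalings.llmd.ai is inferred from the error message above, not confirmed against the llm-d operator manifests:

    # Hedged sketch, not part of the test suite: check whether the CRD backing
    # the llm-d VariantAutoscaling kind is installed before running the
    # autoscale tests.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    def variant_autoscaling_crd_present() -> bool:
        config.load_kube_config()  # e.g. the kubeconfig written by get-kubeconfig
        api = client.ApiextensionsV1Api()
        try:
            # CRD name assumed from the log: <plural>.<group>
            api.read_custom_resource_definition("variantautoscalings.llmd.ai")
            return True
        except ApiException as exc:
            if exc.status == 404:  # the state the controller reports as ScalingCRDNotFound
                return False
            raise

    if __name__ == "__main__":
        print("VariantAutoscaling CRD installed:", variant_autoscaling_crd_present())

If the check comes back False, the fix belongs in cluster setup (installing the llm-d CRDs before the e2e run), not in the test code.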
[... the same poll and "Waiting" records continue unchanged once per second from 20:13:07 through 20:13:15 ...]
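The repeating start/end and "Waiting" records come from a once-per-second condition poll (test_llm_inference_service.py:632). A rough sketch of that pattern, assuming a get_llmisvc callable that returns the resource as a dict; the helper and parameter names here are illustrative, not the suite's actual API:

    # Hedged sketch of the wait loop behind the records above: poll once per
    # second and return only when every expected condition type is "True".
    import time

    def wait_for_true_conditions(get_llmisvc, name, namespace, expected, timeout_s=600):
        missing = set(expected)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            conditions = get_llmisvc(name, namespace).get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return conditions  # every expected condition reports True
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {set(expected)}, got {conditions}")
            time.sleep(1)
        raise TimeoutError(f"conditions never became True: {missing}")

Because ScalingCRDNotFound is a non-transient reason, a loop like this can only run out its timeout here; treating that reason as fatal on first sight would arguably fail the test minutes earlier.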
[... and keeps repeating with no change in any condition through the 20:13:23 iteration ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:24.321798] start - args=(,
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:24.329608] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:25.329922] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:25.338226] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:26.338642] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:26.346593] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:27.347009] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:27.356984] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:28.357337] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:28.364556] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:29.364976] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:29.372655] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:30.373043] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:30.380889] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:31.381259] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:31.388098] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:32.388389] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:32.396898] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:33.397286] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:33.406157] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:34.406507] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:34.414967] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:35.415343] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:35.427021] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:36.427523] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:36.434577] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:37.434926] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:37.442481] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:38.442785] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:38.450396] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:39.450913] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:39.458972] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:40.459504] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:40.467381] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:41.467795] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:41.474919] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:42.475210] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:42.482659] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:43.483017] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:43.490430] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:44.490905] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:44.499745] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:45.500108] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:45.508233] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:46.508714] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:46.516817] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:47.517243] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:47.525647] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:48.526102] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:48.533480] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:49.533774] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:49.541574] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:50.541859] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:50.549227] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:51.549563] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:51.556916] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:52.557370] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:52.565488] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:53.565828] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:53.573783] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:54.574189] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:54.582110] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:55.582634] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:55.590778] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:56.591057] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:56.598670] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:57.598969] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:57.606195] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:58.606501] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:58.613489] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:13:59.613922] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:13:59.620791] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:00.621121] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:00.628235] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:01.628577] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:01.635941] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:02.636347] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:02.643687] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:03.644001] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:03.651461] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:04.651818] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:04.659241] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:05.659620] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:05.667541] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:06.667921] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:06.675658] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:07.676038] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:07.683591] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:08.683983] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:08.690809] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:09.691157] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:09.700028] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:10.700328] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:10.707025] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:11.707345] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:11.714809] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:12.715102] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:12.725726] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:13.726115] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:13.733971] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:14.734233] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:14.741871] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:15.742191] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:15.749757] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:16.750048] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:16.757070] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:17.757362] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:17.764839] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:18.765196] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:18.773087] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:19.773660] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:19.781508] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:20.781855] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:20.789913] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:21.790387] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:21.798330] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:22.798757] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:22.805968] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:23.806247] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:23.813382] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:24.813690] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:24.821450] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:25.821935] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:25.829957] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:26.830369] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:26.838467] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:27.838780] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:27.846622] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:28.847074] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:28.854794] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:29.855233] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:29.862680] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:30.862966] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:30.870706] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:31.871152] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:31.887403] end - ✅ in 0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:32.887737] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:32.895026] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:33.895357] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:33.904682] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
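The repeated ScalingCRDNotFound condition above means the cluster has no CRD registered for kind "VariantAutoscaling" in API group llmd.ai/v1alpha1, so the llmisvc controller cannot create the VariantAutoscaling object backing the KEDA/LWS autoscaling test and the service can never become Ready. A minimal diagnostic sketch with the kubernetes Python client to confirm the CRD is absent; the plural name "variantautoscalings.llmd.ai" is an assumption derived from the kind and group (verify with kubectl api-resources), and this snippet is not part of the test suite:

    # Hypothetical diagnostic sketch: check whether the VariantAutoscaling CRD
    # is installed on the cluster the tests run against.
    from kubernetes import client, config

    config.load_kube_config()  # assumes a local kubeconfig, e.g. the one written by get-kubeconfig
    crds = client.ApiextensionsV1Api().list_custom_resource_definition().items
    installed = {crd.metadata.name for crd in crds}
    # "variantautoscalings.llmd.ai" is an assumed plural for kind VariantAutoscaling, group llmd.ai
    print("variantautoscalings.llmd.ai present:", "variantautoscalings.llmd.ai" in installed)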
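For context, the lines this loop emits follow a standard poll-until-conditions-true pattern. A minimal sketch of what test_llm_inference_service.py appears to be doing around line 632, assuming get_llmisvc returns the LLMInferenceService as a dict and takes (name, namespace, version) as the logged args suggest; names and signature are inferred from the log, not taken from the actual source:

    # Hypothetical sketch, not the actual test code: wait until a required set
    # of status conditions on the LLMInferenceService is True.
    import time

    REQUIRED = {"Ready", "RouterReady", "WorkloadsReady"}

    def wait_for_conditions(get_llmisvc, name, namespace, version="v1alpha1",
                            timeout_s=600, interval_s=1.0):
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            llmisvc = get_llmisvc(name, namespace, version)
            conditions = llmisvc.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = REQUIRED - true_types
            if not missing:
                return llmisvc  # all required conditions are True
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {REQUIRED}, got {conditions}")
            time.sleep(interval_s)  # matches the ~1s cadence in the log
        raise TimeoutError(f"{namespace}/{name}: {REQUIRED} never all became True")

Because the missing CRD is not a transient condition, a loop like this can only run out its timeout, which is consistent with the step ultimately exiting non-zero.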
[... the same one-second poll/wait cycle continues unchanged from 2026-04-24T20:14:34 through 2026-04-24T20:14:51, still reporting ScalingCRDNotFound for MainWorkloadReady, Ready, and WorkloadsReady ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:52.043394] start - args=(,
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:52.051627] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:53.051893] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:53.059129] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:54.059701] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:54.068824] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:55.069112] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:55.077896] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:56.078221] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:56.086420] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:57.086968] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:57.094846] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:58.095220] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:58.103092] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:14:59.103365] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:14:59.110563] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:00.110915] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:00.130995] end - ✅ in 0.020s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:01.131268] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:01.138635] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:02.138978] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:02.146703] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:03.147028] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:03.154653] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:04.154995] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:04.162407] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:05.162739] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:05.170118] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:06.170629] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:06.178057] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:07.178401] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:07.185817] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:08.186125] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:08.193203] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:09.194345] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:09.202290] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:10.202702] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:10.214528] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:11.214796] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:11.222006] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:12.222370] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:12.229934] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:13.230251] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:13.240208] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:14.240549] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:14.247984] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:15.248345] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:15.256167] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:16.256515] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:16.263213] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:17.263529] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:17.270965] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:18.271336] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:18.278706] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:19.279047] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:19.286740] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:20.287144] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:20.294545] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:21.294898] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:21.302279] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:22.302668] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:22.310955] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:23.311356] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:23.318918] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:24.319268] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:24.326657] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:25.326938] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:25.335190] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:26.335552] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:26.343381] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:27.343718] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:27.350866] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:28.351141] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:28.358175] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:29.358523] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:29.367073] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:30.367366] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:30.374925] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:31.375268] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:31.383271] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:32.383708] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:32.392275] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:33.392674] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:33.400081] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:34.400445] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:34.408453] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:35.408770] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:35.415991] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:36.416361] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:36.423788] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:37.424043] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:37.431802] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:38.432105] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:38.439992] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:39.440400] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:39.448894] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:40.449413] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:40.456893] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:41.457192] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:41.464916] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:42.465326] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:42.473260] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:43.473858] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:43.481191] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:44.481487] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:44.490193] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:45.490517] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:45.497969] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:15:46.498276] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:15:46.506933] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
[... the get_llmisvc start/end and "Waiting: Missing true conditions" records repeat essentially verbatim about once per second from 2026-04-24T20:15:46 through 2026-04-24T20:16:18, always reporting the same six conditions with the same ScalingCRDNotFound message; the repetitions are omitted ...]
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:18.787682] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:18.795352] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:19.795683] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:19.803550] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:20.803852] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:20.811488] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:21.811770] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:21.818956] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:22.819215] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:22.826983] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:23.827268] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:23.835116] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:24.835467] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:24.854371] end - ✅ in 0.019s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:25.854777] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:25.862165] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:26.862472] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:26.870252] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:27.870542] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:27.878000] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:28.878375] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:28.886224] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:29.886802] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:29.895424] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:30.895707] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:30.904525] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:31.904827] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:31.912824] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:32.913211] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:32.921269] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:33.921678] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:33.929152] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:34.929489] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:34.936874] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:35.937203] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:35.945459] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:36.945770] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:36.953776] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:37.954127] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:37.961774] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:38.962102] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:38.971866] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:39.972229] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:39.979688] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:40.979957] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:40.988083] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:41.988551] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:41.996657] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:42.996954] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:43.008209] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:44.008647] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:44.017141] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:16:45.017500] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:16:45.024937] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
[e2e-llm-inference-service] [... the same get_llmisvc start/end pair and an identical "Waiting: Missing true conditions" record repeat roughly once per second from 2026-04-24T20:16:45 through 2026-04-24T20:17:17; the six conditions and their 2026-04-24T20:06:01Z lastTransitionTime never change ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:18.305865] start - args=(,
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:18.313912] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:19.314254] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:19.321538] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:20.321858] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:20.330037] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:21.330348] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:21.337788] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:22.338089] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:22.345729] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:23.346077] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:23.353799] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:24.354105] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:24.361931] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:25.362333] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:25.371061] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:26.371352] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:26.378813] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:27.379123] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:27.386613] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:28.388005] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:28.395637] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:29.395952] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:29.403818] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:30.404211] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:30.411790] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:31.412113] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:31.419349] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:32.419767] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:32.428078] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:33.428528] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:33.436441] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:34.436727] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:34.444403] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:35.444701] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:35.452934] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:36.453203] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:36.461159] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:37.461485] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:37.469355] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:38.469731] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:38.477162] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:39.477550] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:39.485530] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:40.485843] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:40.493160] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:41.493478] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:41.501193] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:42.501530] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:42.509213] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:43.509633] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:43.520665] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:44.521018] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:44.529661] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:45.530019] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:45.538289] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:46.538806] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:46.546370] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:47.546727] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:47.555232] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:48.555759] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:48.563790] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:49.564069] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:49.572055] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:50.572381] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:50.582172] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:51.582569] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:51.591104] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:52.591501] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:52.600389] end - ✅ in 0.009s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:53.600734] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:53.608248] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:54.608606] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:54.618073] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:55.618379] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:55.627177] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:56.627595] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:56.635323] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:57.635615] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:57.644196] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:58.644587] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:58.652244] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:17:59.652594] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:17:59.660240] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:00.660656] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:00.683730] end - ✅ in 0.023s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:01.684082] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:01.691847] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:02.692154] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:02.700260] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:03.700782] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:03.709943] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:04.710324] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:04.717878] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:05.718263] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:05.726240] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:06.726556] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:06.734280] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:07.734620] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:07.742523] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:08.742879] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:08.750957] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:09.751393] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:09.759039] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:10.759493] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:10.769408] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:11.770035] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:11.778074] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:12.778596] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:12.786583] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
[... the get_llmisvc poll (start/end, ~0.01s each) and the identical condition output above repeat once per second from 20:18:12.778596 through 20:18:45.296505; only the poll timestamps change ...]
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:45.288614] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:45.296505] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:46.296916] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:46.306017] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:47.306490] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:47.320398] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:48.320873] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:48.329059] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:49.329362] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:49.337611] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:50.338102] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:50.345954] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:51.346228] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:51.354050] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:52.354454] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:52.362805] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:53.363239] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:53.371093] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:54.371530] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:54.378994] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:55.379500] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:55.390262] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:56.390671] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:56.398565] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:57.398845] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:57.407001] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:58.407360] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:58.415237] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:18:59.415818] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:18:59.423608] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:00.423976] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:00.431873] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:01.432278] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:01.439934] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:02.440325] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:02.448193] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:03.448584] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:03.456367] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:04.456685] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:04.464189] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:05.464476] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:05.477260] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:06.477587] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:06.485505] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:07.485951] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:07.494216] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:08.494656] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:08.503064] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:09.503606] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:09.511869] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:10.512328] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:10.519650] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:11.519944] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:11.528196] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:12.528679] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:12.536249] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:13.536803] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:13.544580] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:14.544906] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:14.553380] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:15.553869] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:15.574852] end - ✅ in 0.021s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:16.575162] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:16.582726] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:17.583222] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:17.593569] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:18.593985] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:18.610205] end - ✅ in 0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:19.610591] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:19.618582] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:20.618886] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:20.626491] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:21.626797] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:21.635422] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:22.635682] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:22.643540] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:23.643876] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:23.652087] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:24.652472] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:24.659796] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:25.660120] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:25.667993] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:26.668520] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:26.680949] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
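The loop above is the test's readiness poll: test_llm_inference_service.py:632 fetches the LLMInferenceService once per second and diffs the condition types whose status is 'True' against the expected set {'Ready', 'RouterReady', 'WorkloadsReady'}. A minimal sketch of that pattern with the Kubernetes Python client follows; it is an illustration only, and the group and plural resource name are assumptions, not taken from the KServe test suite:

    # Hypothetical readiness poll mirroring the log's behavior (names assumed).
    import time
    from kubernetes import client, config

    EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

    def wait_for_llmisvc_ready(name, namespace, timeout_s=600):
        config.load_kube_config()  # or config.load_incluster_config() in a pod
        api = client.CustomObjectsApi()
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            # group/version/plural are assumptions based on the 'v1alpha1' arg in the log
            obj = api.get_namespaced_custom_object(
                group="serving.kserve.io",
                version="v1alpha1",
                namespace=namespace,
                plural="llminferenceservices",
                name=name,
            )
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return obj
            print(f"Waiting: Missing true conditions: {missing}")
            time.sleep(1)
        raise TimeoutError(f"{namespace}/{name} never reached {EXPECTED}")

Because every poll returns the same ScalingCRDNotFound state, this loop can only run out its timeout; the log below is the remainder of that wait.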
[... the one-second poll loop continues unchanged from 20:19:27 through 20:19:43, emitting the identical condition list on every iteration ...]
kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:43.826150] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:43.833830] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:44.834260] start - args=(, 
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:44.842227] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:45.842703] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:45.850426] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:46.850847] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:46.857973] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:47.859350] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:47.866978] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:48.867362] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:48.874930] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:49.875424] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:49.882952] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:50.883399] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:50.891282] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:51.891643] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:51.899529] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:52.899786] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:52.907173] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:53.907662] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:53.915126] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:54.915635] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:54.923816] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 
'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:55.924123] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:55.932263] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:56.932794] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:56.940456] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:57.940910] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:57.949083] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:58.949577] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:58.957664] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:19:59.958108] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:19:59.965932] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 
'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:00.966223] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:00.973562] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:01.973877] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:01.981405] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:02.981709] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:02.988927] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:03.989216] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:03.997956] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:04.998426] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:05.006127] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:06.006566] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:06.014101] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:07.014533] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:07.021770] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 
'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:08.022083] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:08.029876] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" 
in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:09.030286] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:09.037655] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [
    {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
    {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
    {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'},
    {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... this get_llmisvc / Waiting cycle repeats once per second with identical condition output (only the poll timestamps advance); iterations 20:20:10 through 20:20:17 elided ...]
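Editor's note: the repeated three-line cycle above (get_llmisvc start, get_llmisvc end, "Waiting: Missing true conditions") is a standard Kubernetes condition-poll loop. For orientation, here is a minimal sketch of what such a wait looks like; this is a hypothetical reconstruction of the shape of the loop, not the actual code behind test_llm_inference_service.py:632, and the get_llmisvc callable, timeout, and interval are assumptions.

```python
import time

EXPECTED = frozenset({"Ready", "RouterReady", "WorkloadsReady"})

def wait_for_conditions(get_llmisvc, name, namespace,
                        expected=EXPECTED, timeout=900.0, interval=1.0):
    """Poll an LLMInferenceService until every expected condition is True.

    Hypothetical sketch: `get_llmisvc` is assumed to return the resource
    as a dict (as the Kubernetes dynamic client would).
    """
    missing = set(expected)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resource = get_llmisvc(name, namespace, "v1alpha1")
        conditions = resource.get("status", {}).get("conditions", [])
        # Collect the condition types currently reporting status == "True".
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = expected - true_types
        if not missing:
            return resource  # every expected condition is True
        print(f"Waiting: Missing true conditions: {set(missing)}, "
              f"expected {set(expected)}, got {conditions}")
        time.sleep(interval)
    raise TimeoutError(f"conditions never became True: {set(missing)}")
```

Because the controller keeps reporting the same terminal ScalingCRDNotFound reason, a loop of this shape can only spin until its timeout expires, which is exactly the pattern in the log.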
[... identical iterations continue at 20:20:18 through 20:20:25 ...]
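Editor's note: every iteration fails for the same root cause, reason 'ScalingCRDNotFound': the cluster has no CustomResourceDefinition serving kind "VariantAutoscaling" in group "llmd.ai", so the llmisvc controller cannot reconcile the VariantAutoscaling object for the autoscale-keda-lws workload, and MainWorkloadReady / WorkloadsReady / Ready stay False. A quick way to confirm is to list the cluster's CRDs and filter by group and kind. The sketch below uses the official `kubernetes` Python client; it is a diagnostic aid and not part of the CI tooling, and the fix-it hint in the final message is an assumption about where the CRD comes from.

```python
from kubernetes import client, config

# Load credentials from the usual kubeconfig (use load_incluster_config()
# when running inside a pod instead).
config.load_kube_config()
api = client.ApiextensionsV1Api()

# Search all CRDs for kind "VariantAutoscaling" in group "llmd.ai",
# matching the kind/group named in the ScalingCRDNotFound message.
crds = api.list_custom_resource_definition()
matches = [
    crd.metadata.name
    for crd in crds.items
    if crd.spec.group == "llmd.ai" and crd.spec.names.kind == "VariantAutoscaling"
]

if matches:
    print("Found VariantAutoscaling CRD(s):", matches)
else:
    print('No CRD with kind "VariantAutoscaling" in group "llmd.ai"; '
          "the llm-d autoscaling CRDs likely need to be installed "
          "before this test can pass.")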
[... identical iterations continue at 20:20:26 through 20:20:42 ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:43.306175] start - args=(,
'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:43.313773] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:44.314193] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:44.321859] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is 
progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:45.322232] start - args=(, 'autoscale-keda-lws', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:45.329147] end - ✅ in 0.007s [e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T20:20:45.329282] end - ❌ 900.001s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T20:20:45.329682] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, [e2e-llm-inference-service] 'managed_fields': None, [e2e-llm-inference-service] 'name': 'autoscale-keda-lws', [e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test', [e2e-llm-inference-service] 'owner_references': None, [e2e-llm-inference-service] 'resource_version': None, 
[e2e-llm-inference-service] 'self_link': None, [e2e-llm-inference-service] 'uid': None}, [e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-keda-l-78828c4a'}, [e2e-llm-inference-service] {'name': 'workload-llmd-simulator-lws-aut-1696d0b7'}, [e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-keda-lws-1337f511'}]}, [e2e-llm-inference-service] 'status': None}), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [delete_llmisvc] [2026-04-24T20:20:45.351342] end - ✅ in 0.021s [e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_autoscaling_keda_lws] [2026-04-24T20:20:45.351431] end - ❌ 900.123s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'LWS is progressing', 'reason': 'Progressing', 'severity': 'Info', 'status': 'False', 'type': 'WorkerWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:06:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-keda-lws-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] _ test_llm_autoscaling_cleanup_hpa[router-managed-workload-llmd-simulator-no-replicas-scaling-hpa] _ [e2e-llm-inference-service] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python [e2e-llm-inference-service] [e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-hpa'], prompt='KServe is a', ser... 
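Both autoscaling failures in this run share one root cause: the cluster is missing the llm-d VariantAutoscaling CRD, so the controller cannot reconcile the per-variant autoscaling object and reports ScalingCRDNotFound until the 900s wait expires. A pre-flight check would surface this in seconds instead of fifteen minutes. A minimal sketch, assuming the standard kubernetes Python client is available; the CRD name "variantautoscalings.llmd.ai" is inferred from kind "VariantAutoscaling" and group "llmd.ai" via the usual plural.group convention and is an assumption, not confirmed by this log:

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    def crd_installed(crd_name: str) -> bool:
        """Return True if the named CustomResourceDefinition exists on the cluster."""
        config.load_kube_config()  # use config.load_incluster_config() inside a pod
        api = client.ApiextensionsV1Api()
        try:
            api.read_custom_resource_definition(crd_name)
            return True
        except ApiException as exc:
            if exc.status == 404:
                return False
            raise

    # Assumed CRD name, derived from the kind/group in the failure message.
    if not crd_installed("variantautoscalings.llmd.ai"):
        raise SystemExit(
            "VariantAutoscaling CRD not installed; autoscaling e2e tests "
            "will time out with reason ScalingCRDNotFound"
        )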
[e2e-llm-inference-service] _ test_llm_autoscaling_cleanup_hpa[router-managed-workload-llmd-simulator-no-replicas-scaling-hpa] _
[e2e-llm-inference-service] [gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-llm-inference-service]
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-hpa'], prompt='KServe is a', ser...
[e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-cleanup-h-aa1ae037'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @pytest.mark.llminferenceservice
[e2e-llm-inference-service]     @pytest.mark.autoscaling
[e2e-llm-inference-service]     @pytest.mark.autoscaling_hpa
[e2e-llm-inference-service]     @pytest.mark.parametrize(
[e2e-llm-inference-service]         "test_case",
[e2e-llm-inference-service]         [
[e2e-llm-inference-service]             pytest.param(
[e2e-llm-inference-service]                 TestCase(
[e2e-llm-inference-service]                     base_refs=[
[e2e-llm-inference-service]                         "router-managed",
[e2e-llm-inference-service]                         "workload-llmd-simulator-no-replicas",
[e2e-llm-inference-service]                         "scaling-hpa",
[e2e-llm-inference-service]                     ],
[e2e-llm-inference-service]                     prompt="KServe is a",
[e2e-llm-inference-service]                     service_name="autoscale-cleanup-hpa",
[e2e-llm-inference-service]                 ),
[e2e-llm-inference-service]                 marks=[
[e2e-llm-inference-service]                     pytest.mark.cluster_cpu,
[e2e-llm-inference-service]                     pytest.mark.cluster_single_node,
[e2e-llm-inference-service]                     pytest.mark.llmd_simulator,
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]         ],
[e2e-llm-inference-service]         indirect=["test_case"],
[e2e-llm-inference-service]         ids=generate_test_id,
[e2e-llm-inference-service]     )
[e2e-llm-inference-service]     @log_execution
[e2e-llm-inference-service]     def test_llm_autoscaling_cleanup_hpa(test_case: TestCase):
[e2e-llm-inference-service]         """Removing scaling config should delete VA and HPA."""
[e2e-llm-inference-service]         inject_k8s_proxy()
[e2e-llm-inference-service]         kserve_client = _new_kserve_client()
[e2e-llm-inference-service]         service_name = test_case.llm_service.metadata.name
[e2e-llm-inference-service]
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           _create_and_wait(kserve_client, test_case)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:673:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client =
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-hpa'], prompt='KServe is a', ser...
[e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-cleanup-h-aa1ae037'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def _create_and_wait(kserve_client, test_case):
[e2e-llm-inference-service]         """Create LLMISVC and wait for it to be ready."""
[e2e-llm-inference-service]         create_llmisvc(kserve_client, test_case.llm_service)
[e2e-llm-inference-service] >       wait_for_llm_isvc_ready(
[e2e-llm-inference-service]             kserve_client, test_case.llm_service, test_case.wait_timeout
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:295:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] args = (, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kin...-repl-5d16e76d'},
[e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-cleanup-h-aa1ae037'}]},
[e2e-llm-inference-service] 'status': None}, 900)
[e2e-llm-inference-service] kwargs = {}, func_name = 'wait_for_llm_isvc_ready'
[e2e-llm-inference-service] timestamp_start = '2026-04-24T20:20:45.592852', start_time = 1777062045.5931423
[e2e-llm-inference-service] duration = 900.7120158672333, timestamp_end = '2026-04-24T20:35:46.305167'
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @functools.wraps(func)
[e2e-llm-inference-service]     def wrapper(*args, **kwargs):
[e2e-llm-inference-service]         func_name = func.__name__
[e2e-llm-inference-service]
[e2e-llm-inference-service]         timestamp_start = datetime.now().isoformat()
[e2e-llm-inference-service]         logger.info(
[e2e-llm-inference-service]             f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]         start_time = time.time()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         try:
[e2e-llm-inference-service] >           result = func(*args, **kwargs)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/logging.py:40:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client =
[e2e-llm-inference-service] given = {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'annotations': None,
[e2e-llm-inference-service] ...tor-no-repl-5d16e76d'},
[e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-cleanup-h-aa1ae037'}]},
[e2e-llm-inference-service] 'status': None}
[e2e-llm-inference-service] timeout_seconds = 900
[e2e-llm-inference-service]
[e2e-llm-inference-service]     @log_execution
[e2e-llm-inference-service]     def wait_for_llm_isvc_ready(
[e2e-llm-inference-service]         kserve_client: KServeClient,
[e2e-llm-inference-service]         given: V1alpha1LLMInferenceService,
[e2e-llm-inference-service]         timeout_seconds: int = 900,
[e2e-llm-inference-service]     ) -> str:
[e2e-llm-inference-service]         def assert_llm_isvc_ready():
[e2e-llm-inference-service]             out = get_llmisvc(
[e2e-llm-inference-service]                 kserve_client,
[e2e-llm-inference-service]                 given.metadata.name,
[e2e-llm-inference-service]                 given.metadata.namespace,
[e2e-llm-inference-service]                 given.api_version.split("/")[1],
[e2e-llm-inference-service]             )
[e2e-llm-inference-service]
[e2e-llm-inference-service]             if "status" not in out:
[e2e-llm-inference-service]                 raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]             status = out["status"]
[e2e-llm-inference-service]             if "conditions" not in status:
[e2e-llm-inference-service]                 raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]             expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]             got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]             conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]             for condition in conditions:
[e2e-llm-inference-service]                 if condition.get("status") == "True":
[e2e-llm-inference-service]                     got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]             missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]             if missing_conditions:
[e2e-llm-inference-service]                 raise AssertionError(
[e2e-llm-inference-service]                     f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]                 )
[e2e-llm-inference-service]             return True
[e2e-llm-inference-service]
[e2e-llm-inference-service] >       return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:618:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] assertion_fn = <function wait_for_llm_isvc_ready.<locals>.assert_llm_isvc_ready at 0x7f7cef6aa8e0>
[e2e-llm-inference-service] timeout = 900, interval = 1.0
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def wait_for(
[e2e-llm-inference-service]         assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
[e2e-llm-inference-service]     ) -> Any:
[e2e-llm-inference-service]         """Wait for the assertion to succeed within timeout."""
[e2e-llm-inference-service]         deadline = time.time() + timeout
[e2e-llm-inference-service]         while True:
[e2e-llm-inference-service]             try:
[e2e-llm-inference-service] >               return assertion_fn()
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:628:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service]     def assert_llm_isvc_ready():
[e2e-llm-inference-service]         out = get_llmisvc(
[e2e-llm-inference-service]             kserve_client,
[e2e-llm-inference-service]             given.metadata.name,
[e2e-llm-inference-service]             given.metadata.namespace,
[e2e-llm-inference-service]             given.api_version.split("/")[1],
[e2e-llm-inference-service]         )
[e2e-llm-inference-service]
[e2e-llm-inference-service]         if "status" not in out:
[e2e-llm-inference-service]             raise AssertionError("No status found in LLM inference service")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         status = out["status"]
[e2e-llm-inference-service]         if "conditions" not in status:
[e2e-llm-inference-service]             raise AssertionError("No conditions found in status")
[e2e-llm-inference-service]
[e2e-llm-inference-service]         expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
[e2e-llm-inference-service]         got_true_conditions = set()
[e2e-llm-inference-service]
[e2e-llm-inference-service]         conditions = status["conditions"]
[e2e-llm-inference-service]
[e2e-llm-inference-service]         for condition in conditions:
[e2e-llm-inference-service]             if condition.get("status") == "True":
[e2e-llm-inference-service]                 got_true_conditions.add(condition.get("type"))
[e2e-llm-inference-service]
[e2e-llm-inference-service]         missing_conditions = expected_true_conditions - got_true_conditions
[e2e-llm-inference-service]         if missing_conditions:
[e2e-llm-inference-service] >           raise AssertionError(
[e2e-llm-inference-service]                 f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
[e2e-llm-inference-service]             )
[e2e-llm-inference-service] E   AssertionError: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:613: AssertionError
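The wait_for frame in the traceback above is cut off after the try; the except path that produces the per-second "Waiting: ..." INFO lines (logged from test_llm_inference_service.py:632) is not captured. A plausible completion, inferred from the observed one-second cadence and the re-raise at the 900s deadline; this is a sketch, not the verbatim source:

    import logging
    import time
    from typing import Any, Callable

    logger = logging.getLogger(__name__)

    def wait_for(
        assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
    ) -> Any:
        """Wait for the assertion to succeed within timeout."""
        deadline = time.time() + timeout
        while True:
            try:
                return assertion_fn()
            except AssertionError as e:
                # Inferred behavior: once the deadline passes, re-raise the last
                # assertion; otherwise log the "Waiting: ..." line seen in the
                # captured output and retry after the polling interval.
                if time.time() >= deadline:
                    raise
                logger.info(f"Waiting: {e}")
                time.sleep(interval)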
[e2e-llm-inference-service] ------------------------------ Captured log setup ------------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-autoscale-cleanu-e5a6b97f in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-autoscale-cleanu-e5a6b97f
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-autoscale-cleanu-e5a6b97f
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-llmd-simulator-no-repl-5d16e76d in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-llmd-simulator-no-repl-5d16e76d
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-llmd-simulator-no-repl-5d16e76d
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scaling-hpa-autoscale-cleanup-h-aa1ae037 in namespace kserve-ci-e2e-test
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scaling-hpa-autoscale-cleanup-h-aa1ae037
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scaling-hpa-autoscale-cleanup-h-aa1ae037
[e2e-llm-inference-service] ------------------------------ Captured log call -------------------------------
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [test_llm_autoscaling_cleanup_hpa] [2026-04-24T20:20:45.527510] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-hpa'], prompt='KServe is a', service_name='autoscale-cleanup-hpa', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service=LLMInferenceService 'autoscale-cleanup-hpa' in namespace 'kserve-ci-e2e-test' with spec.baseRefs ['router-managed-autoscale-cleanu-e5a6b97f', 'workload-llmd-simulator-no-repl-5d16e76d', 'scaling-hpa-autoscale-cleanup-h-aa1ae037'] (remaining metadata fields None, status None), model_name='facebook/opt-125m')}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T20:20:45.539743] start - args=(, LLMInferenceService 'autoscale-cleanup-hpa' in namespace 'kserve-ci-e2e-test' with spec.baseRefs ['router-managed-autoscale-cleanu-e5a6b97f', 'workload-llmd-simulator-no-repl-5d16e76d', 'scaling-hpa-autoscale-cleanup-h-aa1ae037'] (remaining metadata fields None, status None)), kwargs={}
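The object create_llmisvc submits is deliberately minimal: its spec only lists the three LLMInferenceServiceConfig presets generated in setup, which the controller then merges (hence the PresetsCombined=True condition). Reconstructed from the logged dump as a Python dict, with the client's snake_case attributes mapped to the usual Kubernetes camelCase keys (an assumption about the wire format, not shown in the log):

    # Minimal LLMInferenceService manifest as submitted by the test,
    # reconstructed from the logged object dump; None-valued metadata
    # fields are omitted.
    llm_service_manifest = {
        "apiVersion": "serving.kserve.io/v1alpha1",
        "kind": "LLMInferenceService",
        "metadata": {
            "name": "autoscale-cleanup-hpa",
            "namespace": "kserve-ci-e2e-test",
        },
        "spec": {
            "baseRefs": [
                {"name": "router-managed-autoscale-cleanu-e5a6b97f"},
                {"name": "workload-llmd-simulator-no-repl-5d16e76d"},
                {"name": "scaling-hpa-autoscale-cleanup-h-aa1ae037"},
            ]
        },
    }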
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T20:20:45.592719] end - ✅ in 0.053s
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T20:20:45.592852] start - args=(, LLMInferenceService 'autoscale-cleanup-hpa' in namespace 'kserve-ci-e2e-test' with spec.baseRefs ['router-managed-autoscale-cleanu-e5a6b97f', 'workload-llmd-simulator-no-repl-5d16e76d', 'scaling-hpa-autoscale-cleanup-h-aa1ae037'] (remaining metadata fields None, status None), 900), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:45.593160] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:45.598800] end - ✅ in 0.006s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[e2e-llm-inference-service] [... the get_llmisvc poll and "Waiting: No conditions found in status" repeat once per second through 20:20:48 while the new resource has no status conditions yet ...]
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:49.624280] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:49.630921] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:50.631206] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:50.638714] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:51.639010] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:51.646546] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:52.646801] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:52.654477] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:53.654821] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:53.662027] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:54.662343] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:54.669448] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:55.669765] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:55.677101] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:56.677428] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:56.727716] end - ✅ in 0.050s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:57.728133] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:57.735541] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:20:58.735820] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:20:58.743050] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[... the get_llmisvc start/end pair and the identical "Waiting: Missing true conditions" message repeat unchanged once per second (~38 further iterations, 2026-04-24T20:20:59 through 20:21:36); every iteration reports the same ScalingCRDNotFound conditions with lastTransitionTime fixed at 2026-04-24T20:20:49Z ...]
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:36.565733] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:36.572751] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:37.573020] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:37.580047] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:38.580337] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:38.590634] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:39.590904] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:39.597883] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:40.598127] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:40.605322] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:41.605636] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:41.613069] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:42.613360] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:42.621129] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:43.621410] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:43.628582] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:44.628889] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:44.636943] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:45.637362] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:45.646660] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:46.646929] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:46.657249] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:47.657610] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:47.665285] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:48.665669] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:48.682007] end - ✅ in 0.016s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:49.682468] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:49.689643] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:21:50.689970] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:50.697569] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:51.697896] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:51.706539] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:52.706826] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:52.714255] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:53.714749] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:53.747057] end - ✅ in 0.032s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:54.747350] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:54.754901] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:55.755141] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:55.762778] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:56.763229] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:56.770840] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:57.771082] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:57.778554] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
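Every False condition above carries the same reason, ScalingCRDNotFound: the llmisvc controller is trying to reconcile a VariantAutoscaling object for autoscale-cleanup-hpa, but the API server has no CRD registering kind "VariantAutoscaling" in group/version llmd.ai/v1alpha1 (that CRD presumably comes from the llm-d autoscaling stack rather than from KServe itself). Until it is installed, the service can never turn Ready, so this wait can only run out its timeout. The sketch below is one way to confirm the missing CRD against the same cluster, using the kubernetes Python client; the plural CRD name variantautoscalings.llmd.ai is inferred from the kind and group, not stated anywhere in this log.

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    # Load whatever kubeconfig $KUBECONFIG (or ~/.kube/config) points at.
    config.load_kube_config()
    api = client.ApiextensionsV1Api()

    # Assumed plural name (kind=VariantAutoscaling, group=llmd.ai); verify with
    # `kubectl api-resources` if it does not resolve.
    CRD_NAME = "variantautoscalings.llmd.ai"

    try:
        crd = api.read_custom_resource_definition(CRD_NAME)
        served = [v.name for v in crd.spec.versions if v.served]
        print(f"{CRD_NAME} installed; served versions: {served}")  # want v1alpha1
    except ApiException as e:
        if e.status == 404:
            # Exactly what the controller is hitting: the kind is absent.
            print(f"{CRD_NAME} is NOT installed on this cluster")
        else:
            raise

If the CRD is missing, the fix belongs in the test environment (install the autoscaling CRDs before the autoscale-* cases run), not in the service spec or the test assertions.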
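The shape of this log also suggests a harness improvement: the wait loop at test_llm_inference_service.py:632 polls once per second and re-dumps the full, unchanged condition list on every iteration, even though ScalingCRDNotFound is not a state that resolves by waiting. Below is a hypothetical fail-fast variant; wait_until_ready, the get_llmisvc signature, and the fatal-reason set are illustrative assumptions, not the actual harness code.

    import time

    # Condition reasons that will not resolve by waiting. ScalingCRDNotFound is
    # taken from the log above; treating it as terminal is this sketch's choice.
    FATAL_REASONS = {"ScalingCRDNotFound"}
    REQUIRED = {"Ready", "RouterReady", "WorkloadsReady"}

    def wait_until_ready(get_llmisvc, name, namespace, timeout=600.0, interval=1.0):
        """Poll until every REQUIRED condition is True; fail fast on fatal reasons."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            llmisvc = get_llmisvc(name, namespace)  # illustrative accessor
            conditions = llmisvc.get("status", {}).get("conditions", [])
            by_type = {c.get("type"): c for c in conditions}
            if all(by_type.get(t, {}).get("status") == "True" for t in REQUIRED):
                return llmisvc
            for cond in conditions:
                if cond.get("status") == "False" and cond.get("reason") in FATAL_REASONS:
                    # Surface the root cause (here: the missing VariantAutoscaling
                    # CRD) immediately instead of re-logging it once per second.
                    raise RuntimeError(f"{cond['type']} terminally False: {cond.get('message')}")
            time.sleep(interval)
        raise TimeoutError(f"{namespace}/{name} never reached {sorted(REQUIRED)}")

Failing fast on terminal reasons would surface the missing CRD within a second instead of consuming the whole per-test timeout, and would trim hundreds of duplicated status dumps like the ones collapsed above.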
[... log resumes ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:08.858399] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:08.865443] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime':
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:09.865763] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:09.875027] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:10.875355] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:10.883007] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:11.883291] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:11.891422] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:12.891870] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:12.899501] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:13.899943] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:13.907831] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:14.908345] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:14.916372] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:15.916652] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:15.924932] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:16.925366] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:16.932862] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:17.933141] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:17.940584] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:18.940824] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:18.948124] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:19.948422] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:19.955282] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:20.955559] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:20.962743] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:21.963007] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:21.970156] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:22.970457] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:22.977372] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:23.977687] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:23.985179] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:24.985589] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:24.992982] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:25.993355] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:26.001017] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:27.001277] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:27.008247] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:28.008522] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:28.018097] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:29.018390] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:29.025501] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:30.025832] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:30.033213] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:31.033480] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:31.040788] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:32.041098] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:32.048053] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:33.048341] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:33.064733] end - ✅ in 0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:34.065011] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:34.072220] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:35.072561] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:35.079671] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:36.080015] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:36.087287] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:37.087650] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:37.094528] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
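The reason ScalingCRDNotFound and the discovery error (no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1") point at the same root cause: the VariantAutoscaling CRD from the llmd.ai group is not installed on the test cluster. One way to confirm that from the same environment, using the kubernetes Python client, is sketched below; the CRD name variantautoscalings.llmd.ai is an inferred plural, not something the log states.

```python
from kubernetes import client, config

# Ask the API server whether the VariantAutoscaling CRD is registered.
# "variantautoscalings.llmd.ai" is an assumed plural form for kind
# VariantAutoscaling in group llmd.ai; adjust if the CRD names it differently.
config.load_kube_config()  # use load_incluster_config() when run in-cluster

api = client.ApiextensionsV1Api()
try:
    crd = api.read_custom_resource_definition("variantautoscalings.llmd.ai")
    served = [v.name for v in crd.spec.versions if v.served]
    print(f"CRD present; served versions: {served}")
except client.exceptions.ApiException as exc:
    if exc.status == 404:
        print("CRD not found, which matches the ScalingCRDNotFound condition")
    else:
        raise
```

If the CRD is indeed absent, the likely fix is on the install side (deploying the llmd.ai autoscaling CRDs before this test runs) rather than in the test itself.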
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:09.356459] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:09.363253] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:10.363591] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:10.371239] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:11.371529] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:11.378696] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:12.378958] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:12.386319] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:13.386557] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:13.393504] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:14.393791] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:14.401102] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:15.401598] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:15.408932] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:16.409231] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:16.416399] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:17.416643] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:17.423647] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:18.423908] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:18.431246] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:19.431730] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:19.439254] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:20.439715] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:20.447561] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:21.448073] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:21.455513] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:22.455824] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:22.462956] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:23.463282] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:23.472824] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:24.473226] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:24.480707] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:25.481147] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:25.488279] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:26.488585] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:26.496027] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:27.496562] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:27.503784] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:28.504109] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:28.511725] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:29.512131] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:29.519080] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:30.519618] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:30.527289] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:23:31.527738] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:31.535191] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:32.535751] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:32.543266] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:33.543749] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:33.550948] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:34.551328] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:34.559853] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:35.560115] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:35.567283] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:36.567787] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:36.575424] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]

[... from 2026-04-24T20:23:37 through 2026-04-24T20:24:15 the test polls get_llmisvc('autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1') roughly once per second (each call returns in ~0.007 s), and test_llm_inference_service.py:632 logs the identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" dump on every iteration. The status never changes; all five conditions keep their lastTransitionTime of 2026-04-24T20:20:49Z:

  MainWorkloadReady   False    reason=ScalingCRDNotFound, severity=Info
  PresetsCombined     True     severity=Info
  Ready               False    reason=ScalingCRDNotFound
  RouterReady         Unknown
  WorkloadsReady      False    reason=ScalingCRDNotFound

Every False condition carries the same message: 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"'. The LLMInferenceService autoscale-cleanup-hpa can therefore never become Ready: the controller's discovery lookup finds no VariantAutoscaling kind in API group llmd.ai/v1alpha1, i.e. the llm-d VariantAutoscaling CRD is not installed on the test cluster. Identical iterations are elided below. ...]
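Given the ScalingCRDNotFound loop above, a minimal triage sketch against the test cluster could look like the following. This is a sketch under assumptions, not something this log confirms: variantautoscalings.llmd.ai assumes the conventional <plural>.<group> CRD name for kind VariantAutoscaling in group llmd.ai, and the llminferenceservice kind name assumes it is discoverable as such on the cluster.

  # Sketch only; run with the kubeconfig for the ephemeral test cluster.

  # 1) Does the API server expose any llmd.ai kinds at all?
  kubectl api-resources --api-group=llmd.ai

  # 2) Is the CRD object itself present? A NotFound here would match the
  #    controller's ScalingCRDNotFound condition. (CRD name is an assumption,
  #    derived from kind VariantAutoscaling + group llmd.ai.)
  kubectl get crd variantautoscalings.llmd.ai

  # 3) Re-read the stuck conditions straight from the resource the test polls.
  kubectl get llminferenceservice autoscale-cleanup-hpa -n kserve-ci-e2e-test \
    -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'

If step 1 returns nothing for the llmd.ai group, installing the llm-d VariantAutoscaling CRDs before the test run is the obvious fix; the manifest location is not recorded in this log.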
[... identical get_llmisvc poll iterations elided; the final iteration captured in this log follows. ...] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:15.880520] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:15.887849] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:16.888145] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:16.896493] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:17.896776] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:17.904897] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:18.905199] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:18.912912] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:19.913192] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:19.920183] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:20.920472] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:20.927640] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:24:21.928038] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:21.935165] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:22.935451] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:22.942927] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:23.943397] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:23.950598] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:24.950937] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:24.957633] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:25.957949] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:25.965092] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:26.965550] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:26.973223] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:27.973711] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:27.981223] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:28.981691] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:28.988617] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:29.988898] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:29.998413] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:30.998799] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:31.010488] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:32.010970] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:32.018178] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:33.018627] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:33.034436] end - ✅ in 
0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:34.034908] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:34.042496] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:35.042803] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:35.049883] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:36.050163] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:36.057071] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:37.057337] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:37.064145] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:38.064486] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:38.073560] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
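Before the log resumes, the root cause is worth noting: every iteration carries the same reason, ScalingCRDNotFound. The controller's client cannot map kind VariantAutoscaling in API group llmd.ai/v1alpha1 ("no matches for kind"), which means the corresponding CustomResourceDefinition is not installed on this test cluster, so the main-workload scaling reconcile can never succeed and WorkloadsReady/Ready stay False. A quick presence check for a CRD, sketched with the kubernetes Python client and assuming the conventional <plural>.<group> name variantautoscalings.llmd.ai:

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def crd_exists(crd_name: str) -> bool:
    """Return True if the named CustomResourceDefinition is installed."""
    config.load_kube_config()
    api = client.ApiextensionsV1Api()
    try:
        api.read_custom_resource_definition(crd_name)  # cluster-scoped lookup
        return True
    except ApiException as exc:
        if exc.status == 404:  # CRD not installed on this cluster
            return False
        raise

# 'variantautoscalings.llmd.ai' is the assumed plural.group CRD name.
print(crd_exists("variantautoscalings.llmd.ai"))
```

If the check returns False, the fix is environmental: install the llm-d CRDs (or disable the VariantAutoscaling-backed scaling path) before running this suite; no amount of waiting in the test changes the outcome.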
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:46.128344] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:46.135573] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:47.135838] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:47.143028] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:48.143336] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:48.150816] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:49.151126] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:49.158423] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:50.158678] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:50.165686] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:51.166135] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:51.173601] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:52.174030] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:52.181756] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:53.182173] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:53.189565] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:54.189867] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:54.197103] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:55.197369] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:55.204013] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:56.204356] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:56.211572] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:57.211841] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:57.218881] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:58.219149] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:58.226252] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:59.226693] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:59.234015] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:00.234436] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:00.241203] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:01.241660] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:01.248754] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:02.249059] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:02.256413] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:03.256737] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:03.264166] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:04.264591] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:04.272393] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
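Each iteration above is the readiness wait at test_llm_inference_service.py:632: the test re-fetches the LLMInferenceService once per second and compares the set of condition types whose status is 'True' against the expected set. (The two sets printed in the Waiting record differ only in iteration order; as sets they are equal, meaning none of the three expected conditions is True.) A minimal sketch of that polling pattern, assuming a hypothetical get_llmisvc(client, name, namespace, version) callable that returns the resource as a dict, and an assumed timeout; this illustrates the loop's shape, not the test's actual code:

    import time

    EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

    def wait_for_true_conditions(get_llmisvc, client, name, namespace,
                                 version="v1alpha1", timeout_s=600.0, interval_s=1.0):
        # Poll until every expected condition reports status "True".
        # Helper name, signature, and timeout are assumptions for illustration.
        missing = set(EXPECTED)
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            resource = get_llmisvc(client, name, namespace, version)
            conditions = resource.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return resource  # all expected conditions are True
            # Mirrors the repeated "Waiting: Missing true conditions" record above.
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {EXPECTED}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f"conditions still not True after {timeout_s}s: {missing}")

Because the controller reports the same ScalingCRDNotFound conditions on every fetch, a loop of this shape can only exit through its timeout, which is why the records repeat unchanged for the remainder of the wait.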
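The root cause sits in the condition message itself: 'no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"' is the standard Kubernetes RESTMapper error for a kind whose CRD is not installed on the cluster, so the controller's autoscale-cleanup reconcile path cannot look up the VariantAutoscaling object it expects. A quick way to confirm is to list the cluster's CRDs and look for the llmd.ai group, e.g. kubectl get crd | grep llmd.ai, or programmatically; below is a sketch using the official kubernetes Python client, with the group, kind, and version taken from the error message:

    from kubernetes import client, config

    def variant_autoscaling_crd_installed() -> bool:
        # True if some installed CRD serves kind "VariantAutoscaling" in the
        # llmd.ai group, i.e. the controller's lookup could succeed.
        config.load_kube_config()  # use load_incluster_config() inside a pod
        api = client.ApiextensionsV1Api()
        for crd in api.list_custom_resource_definition().items:
            if crd.spec.group == "llmd.ai" and crd.spec.names.kind == "VariantAutoscaling":
                return True
        return False

    if __name__ == "__main__":
        print("VariantAutoscaling CRD installed:", variant_autoscaling_crd_installed())

If the check comes back False, installing the llmd.ai VariantAutoscaling CRD before the autoscaling tests run (or gating those tests on its presence) would let the conditions above progress.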
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:19.388476] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:19.396224] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason':
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:20.396515] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:20.404179] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:21.404481] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:21.411571] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:22.411897] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:22.419934] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:23.420362] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:23.427623] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:24.427869] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:24.435950] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:25.436347] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:25.443999] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:26.444319] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:26.451242] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:27.451555] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:27.458431] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:28.458707] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:28.466974] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:29.467242] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:29.474503] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:30.474775] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:30.481709] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:31.482016] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:31.489272] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:32.489737] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:32.496648] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:33.496939] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:33.505131] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:34.505344] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:34.512089] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:35.512378] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:35.519711] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:36.519973] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:36.526778] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:37.527052] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:37.534540] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:38.534835] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:38.542193] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:39.542590] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:39.549917] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:40.550321] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:40.557777] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:41.558098] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:41.565700] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:42.565990] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:42.574799] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:43.575251] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:43.582221] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the [get_llmisvc] start/end poll pair and the identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" status message repeat roughly once per second from 2026-04-24T20:25:44 to 2026-04-24T20:26:21; every iteration reports the same five conditions (MainWorkloadReady=False, PresetsCombined=True, Ready=False, RouterReady=Unknown, WorkloadsReady=False), unchanged since lastTransitionTime 2026-04-24T20:20:49Z, the False conditions all carrying reason 'ScalingCRDNotFound': no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1" ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:22.873279] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:22.880096] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:23.880340] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:23.886895] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:24.887162] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:24.893937] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:25.894225] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:25.901052] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:26.901340] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:26.908107] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:27.908393] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:27.916038] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:28.916425] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:28.923442] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:29.923758] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:29.931564] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:30.932004] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:30.938746] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:31.939033] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:31.946649] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:32.947073] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:32.954174] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:33.954445] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:33.961434] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:34.961716] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:34.971977] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:35.972515] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:35.979650] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:36.979945] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:36.986924] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:37.987344] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:37.994628] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:38.994898] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:39.001865] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:40.002152] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:40.009801] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:41.010264] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:41.016878] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:42.017157] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:42.024776] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:43.025053] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:43.031965] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:44.032235] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:44.039040] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:45.039275] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:45.045932] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:46.046191] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:46.052800] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:47.053100] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:47.061394] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]

From 20:26:48.061 through 20:27:26.362 the harness repeats the same three-line cycle roughly once per second: [get_llmisvc] start (args 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), [get_llmisvc] end (✅ in ~0.007s), and an identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" condition dump from test_llm_inference_service.py:632. Every iteration reports the same five conditions, all stamped 2026-04-24T20:20:49Z:

  MainWorkloadReady: False   (reason ScalingCRDNotFound)
  PresetsCombined:   True
  Ready:             False   (reason ScalingCRDNotFound)
  RouterReady:       Unknown
  WorkloadsReady:    False   (reason ScalingCRDNotFound)

The three False conditions all carry the same message: failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1". The intervening iterations are identical; only the final poll of this span is shown below.
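The wait can never succeed: every reconcile fails with ScalingCRDNotFound because nothing on the cluster serves kind "VariantAutoscaling" in group/version llmd.ai/v1alpha1. A quick check from the harness side is to ask the API server for the CRD directly. Here is a minimal sketch using the official kubernetes Python client; the CRD name variantautoscalings.llmd.ai is an assumption based on the usual <plural>.<group> convention, since the log only guarantees the kind and group/version:

```python
# Sketch: confirm whether the CRD the llmisvc controller needs is registered.
# Assumption: the CRD follows the usual <plural>.<group> naming and is called
# variantautoscalings.llmd.ai -- the log only guarantees kind=VariantAutoscaling
# and group/version=llmd.ai/v1alpha1.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

CRD_NAME = "variantautoscalings.llmd.ai"  # assumed plural name

def crd_exists(name: str) -> bool:
    api = client.ApiextensionsV1Api()
    try:
        # GET /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}
        api.read_custom_resource_definition(name)
        return True
    except ApiException as exc:
        if exc.status == 404:
            # Not registered -> the controller reports ScalingCRDNotFound.
            return False
        raise

if __name__ == "__main__":
    config.load_kube_config()  # or config.load_incluster_config() in-cluster
    print(f"{CRD_NAME} registered: {crd_exists(CRD_NAME)}")
```

If this returns False, the CRD simply is not installed, and no amount of polling will flip the conditions.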
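For context, the repeating entries come from a plain condition poll. The following is a minimal reconstruction of that loop, not the harness's actual implementation; get_llmisvc is a stand-in for the harness helper of the same name, assumed (hypothetically) to return the custom resource as a dict:

```python
# Sketch of the wait loop this log reflects: poll the LLMInferenceService
# until the expected condition types all report status "True", or give up
# after a timeout.
import time

EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

def wait_for_conditions(get_llmisvc, name, namespace, timeout_s=600.0):
    """get_llmisvc: stand-in for the harness helper; assumed to return the CR as a dict."""
    missing = set(EXPECTED)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        cr = get_llmisvc(name, namespace)
        conditions = cr.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return conditions  # every expected condition is True
        print(f"Waiting: Missing true conditions: {missing}")
        time.sleep(1)  # matches the ~1 s spacing of the poll entries above
    raise TimeoutError(f"conditions never became True: {missing}")
```

Because the False conditions stem from a reconcile against a missing CRD, nothing the loop observes can change on its own; once ScalingCRDNotFound appears, a timeout is the expected outcome, and the fix belongs in cluster/test setup (registering the llmd.ai CRDs) rather than in the wait.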
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:26.355590] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:26.362600] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:27.362859] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:27.369756] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:28.370184] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:28.377381] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:29.377796] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:29.385258] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:30.385569] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:30.393178] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:31.393746] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:31.402101] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:32.402416] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:32.410248] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:33.410621] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:33.418074] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:34.418545] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:34.426916] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:35.427162] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:35.434772] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:36.435028] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:36.442117] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:37.442397] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:37.449805] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:38.450078] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:38.457006] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:39.457315] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:39.464405] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:40.464799] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:40.471967] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:41.472218] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:41.479243] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:42.479520] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:42.487000] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:27:43.487292] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:43.494743] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:44.495177] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:44.503161] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:45.503506] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:45.511224] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:46.511525] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:46.518336] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:47.518657] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:47.526384] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:48.526650] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:48.533999] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:49.534393] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:49.541548] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:50.541828] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:50.549195] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:51.549688] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:51.557027] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:52.557485] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:52.565545] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:53.565990] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:53.573716] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:54.574220] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:54.582095] end - ✅ in 
[... the get_llmisvc poll (start/end, ~0.007s each) and the identical "Waiting: Missing true conditions" entry repeat once per second from 2026-04-24T20:27:54 through 20:28:28, every iteration returning the same five conditions with reason ScalingCRDNotFound; verbatim duplicates elided ...]
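
The test side of the loop, test_llm_inference_service.py:632, re-reads the LLMInferenceService autoscale-cleanup-hpa once per second and checks that the condition types Ready, RouterReady, and WorkloadsReady all report status True; the "Missing true conditions" set it prints never shrinks. The same view the poller sees can be pulled from the cluster directly, sketched here under the assumption that the resource plural is llminferenceservices (the log shows only the object name, namespace, and API version v1alpha1):

    #!/usr/bin/env bash
    # Sketch: print one line per status condition (type/status/reason),
    # mirroring the dicts the wait loop logs every second.
    set -euo pipefail
    export KUBECONFIG=/credentials/cluster-886sz-kubeconfig

    kubectl get llminferenceservices autoscale-cleanup-hpa -n kserve-ci-e2e-test \
      -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.reason}{"\n"}{end}'

Given the log, this should show exactly the five entries the loop keeps printing: MainWorkloadReady, Ready, and WorkloadsReady as False with ScalingCRDNotFound, PresetsCombined as True, and RouterReady as Unknown.
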
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:13.725861] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:13.733488] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:14.733826] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:14.741168] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:15.741548] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:15.749875] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:16.750270] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:16.757353] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:17.757693] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:17.764709] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:18.764991] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:18.772467] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:19.772722] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:19.779888] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:20.780182] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:20.787500] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:21.787925] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:21.794705] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:22.794953] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:22.801691] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:23.801976] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:23.809273] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:00.084751] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:00.091770] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:01.092143] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:01.099156] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:02.099474] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:02.106246] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:03.106578] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:03.113686] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:04.113976] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:04.121431] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:05.121727] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:05.128775] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:06.129102] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:06.136112] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:07.136472] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:07.144551] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:08.144829] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:08.152069] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:09.152362] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:09.159631] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:10.159893] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:10.166906] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:11.167214] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:11.175070] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:12.175342] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:12.182113] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:13.182409] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:13.189726] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:14.189986] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:14.197542] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
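The reason the wait cannot succeed is in the condition message itself: 'no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"' means the API server has no VariantAutoscaling CRD registered, so the controller's reconcile fails before any workload scaling happens. One way to confirm this from a machine with cluster access, using the official kubernetes Python client; the CRD name variantautoscalings.llmd.ai is inferred from the Kind and group in the message, not confirmed by this log:

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    ext = client.ApiextensionsV1Api()
    try:
        crd = ext.read_custom_resource_definition("variantautoscalings.llmd.ai")
        print("CRD present; served versions:", [v.name for v in crd.spec.versions])
    except client.exceptions.ApiException as exc:
        if exc.status == 404:
            print("CRD not found: matches the ScalingCRDNotFound condition")
        else:
            raise

If the CRD really is absent, the likely fix is installing the llm-d VariantAutoscaling CRD (or disabling that scaling path in this test profile) rather than extending the wait timeout, since the condition set has not changed since 20:20:49. The final poll in this excerpt follows.
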
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:33.433801] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:33.441178] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:34.441580] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:34.448452] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:35.448759] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:35.456041] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:36.456357] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:36.463470] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
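[analysis] Every retry above fails for the same root cause: the llmisvc controller tries to reconcile a VariantAutoscaling resource (group llmd.ai, version v1alpha1), but no such kind is registered with the API server, so MainWorkloadReady, WorkloadsReady, and Ready stay False and the LLMInferenceService can never become Ready. A minimal sketch of how one might confirm the missing CRD against the same cluster follows; the CRD name variantautoscalings.llmd.ai assumes the usual plural.group naming convention and is not taken from the log:

    # List every kind the API server recognizes under the llmd.ai group
    # (empty output means the CRD was never installed on this cluster)
    kubectl api-resources --api-group=llmd.ai

    # Probe the CRD directly; a NotFound error here matches the
    # ScalingCRDNotFound condition in the controller's status
    kubectl get crd variantautoscalings.llmd.ai

    # Inspect the stuck service's conditions without tailing the test logs
    # (resource name assumes KServe's llminferenceservices plural)
    kubectl get llminferenceservices autoscale-cleanup-hpa -n kserve-ci-e2e-test -o jsonpath='{.status.conditions}'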
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:04.669137] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:04.676072] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'},
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:05.676536] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:05.683284] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:06.683653] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:06.690857] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:07.691239] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:07.697714] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:08.698025] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:08.706013] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:09.706481] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:09.715825] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:10.716141] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:10.726056] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:11.726369] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:11.733023] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:12.733333] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:12.740125] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:13.740454] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:13.746885] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:30:14.747158] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:14.754500] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:15.754924] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:15.761902] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
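The reason 'ScalingCRDNotFound' indicates the llmisvc controller's discovery lookup for VariantAutoscaling in llmd.ai/v1alpha1 found no registered kind, i.e. the CRD is not installed on the test cluster. A minimal sketch of how one could confirm that from the kubeconfig the pipeline already wrote, using the kubernetes Python client, is below; the plural CRD name "variantautoscalings.llmd.ai" is an assumption following the usual naming convention and does not appear in the log.

# Sketch: confirm the missing CRD behind reason=ScalingCRDNotFound above.
# Assumption (not in the log): the CRD is named "variantautoscalings.llmd.ai".
# Uses whatever kubeconfig $KUBECONFIG points at, e.g. the one written to
# /credentials/cluster-886sz-kubeconfig earlier in this run.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

def variant_autoscaling_crd_served() -> bool:
    """Return True only if the VariantAutoscaling CRD exists and serves v1alpha1."""
    config.load_kube_config()
    ext = client.ApiextensionsV1Api()
    try:
        crd = ext.read_custom_resource_definition("variantautoscalings.llmd.ai")
    except ApiException as exc:
        if exc.status == 404:
            return False  # same situation as the 'no matches for kind' error
        raise
    # The controller asks specifically for v1alpha1, so check the served versions.
    return any(v.name == "v1alpha1" and v.served for v in crd.spec.versions)

if __name__ == "__main__":
    print("VariantAutoscaling llmd.ai/v1alpha1 served:", variant_autoscaling_crd_served())

If this prints False, installing the llmd.ai VariantAutoscaling CRD before the autoscale tests run (or skipping the autoscaling path on clusters without it) is the likely fix: the controller cannot reconcile the main VA until the kind is discoverable.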
[... the cycle continues unchanged from 2026-04-24T20:30:16 through 2026-04-24T20:30:26: MainWorkloadReady, Ready, and WorkloadsReady stay False with reason 'ScalingCRDNotFound' and the same 'failed to reconcile main workload scaling: failed to reconcile main VA' message, RouterReady stays Unknown, and only PresetsCombined is True. ...]
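The repetition itself is just the test's readiness wait. A minimal sketch of that pattern follows; it is not the project's actual helper: get_llmisvc stands in for the logged wrapper (whose first argument is elided in the log), and the one-second interval and timeout value are assumptions.

# Sketch of the wait loop producing the repeated lines above: poll the
# resource's status.conditions once per second until every expected condition
# reports status "True", or give up at the deadline.
import time

def wait_for_true_conditions(get_llmisvc, name, namespace, expected, timeout_s=600):
    missing = set(expected)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        resource = get_llmisvc(name, namespace)
        conditions = resource.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = set(expected) - true_types
        if not missing:
            return conditions  # every expected condition is True
        print(f"Waiting: Missing true conditions: {missing}, expected {set(expected)}")
        time.sleep(1)
    raise TimeoutError(f"conditions still not True after {timeout_s}s: {missing}")

Because the failed conditions' lastTransitionTime stays pinned at 2026-04-24T20:20:49Z, a loop like this can only spin until its timeout; nothing the poll does changes the underlying ScalingCRDNotFound state.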
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:27.845424] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:27.852558] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:28.852833] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:28.859736] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:29.860034] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:29.866597] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:30.866871] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:30.874457] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
[... identical start/end/Waiting poll cycles from 20:30:30 through 20:31:07 elided, one per second; the condition payload never changes (lastTransitionTime stays 2026-04-24T20:20:49Z, reason stays ScalingCRDNotFound), and the final captured cycle appears below ...]
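The failure is environmental rather than a test-logic bug: the error text 'no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"' is the standard Kubernetes response when no resource is registered for that group/version, so the controller cannot reconcile the generated autoscale-cleanup-hpa-kserve-va object. A triage sketch, assuming kubectl is on PATH and already pointed at the test cluster, and that the CRD's plural name is variantautoscalings (inferred from the kind, not confirmed):

import subprocess

# List every kind served under the llmd.ai API group; empty output is exactly
# the state that produces 'no matches for kind "VariantAutoscaling"'.
subprocess.run(["kubectl", "api-resources", "--api-group=llmd.ai"], check=False)

# Query the CRD object directly; a NotFound error confirms the CRD was never
# applied to this cluster (the plural name here is an assumption).
subprocess.run(["kubectl", "get", "crd", "variantautoscalings.llmd.ai"], check=False)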
[2026-04-24T20:31:05.130225] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:05.137549] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:06.137816] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:06.144837] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:07.145154] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:07.153132] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:08.153496] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:08.161046] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:09.161492] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:09.168395] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:10.168689] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:10.176572] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:11.176894] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:11.184692] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:12.184956] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:12.191943] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:13.192238] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:13.199736] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:14.200044] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:14.208617] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:15.208958] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:15.216351] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:16.216657] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:16.223779] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:17.224083] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:17.231165] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:18.231434] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:18.238414] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:19.238695] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:19.245724] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:20.246007] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:20.253071] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:21.253429] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:21.260405] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:22.260754] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:22.267983] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:23.268223] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:23.274896] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:24.275201] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:24.282050] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:25.282344] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:25.289004] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:26.289337] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:26.296749] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:27.297024] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:27.304453] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:28.304862] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:28.311695] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:29.311944] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:29.319181] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:30.319530] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:30.327115] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:31.327356] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:31.334166] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:32.334440] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:32.341075] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:33.341388] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:33.348772] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:34.349010] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:34.359893] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:35.360228] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:35.368402] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:36.368754] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:36.376006] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:37.376396] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:37.384422] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:38.384653] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:38.391784] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:39.392079] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:39.400710] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:40.400971] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:40.407772] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:41.408072] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:41.415028] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:42.415352] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:42.425498] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:43.425802] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:43.433072] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:44.433331] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:44.440655] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:45.440929] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:45.448446] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:46.448695] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:46.455959] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:47.456260] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:47.463790] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:48.464226] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:48.471152] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:49.471524] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:49.478212] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:50.478507] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:50.485468] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:51.485771] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:51.492918] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:52.493244] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:52.502123] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:53.502574] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:53.509510] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:54.509770] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:54.516921] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:31:55.517169] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:55.524101] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:56.524365] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:56.531106] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:57.531441] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:57.538131] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:58.538448] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:58.545465] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:59.545922] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:59.553264] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:00.553645] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:00.560905] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:01.561214] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:01.568721] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:02.569151] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:02.576623] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:03.576898] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:03.584191] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:04.584459] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:04.591832] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... roughly three dozen near-identical poll iterations elided: the logging.py:34/43 [get_llmisvc] start/end pairs repeat once per second from 20:32:05 through 20:32:40, each call returning ✅ in ~0.007s, and every iteration logs the same test_llm_inference_service.py:632 "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" line with the identical ScalingCRDNotFound condition list shown above; only the timestamps advance. The last iteration of the excerpt follows the sketches below. ...]
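
What test_llm_inference_service.py:632 is doing here is polling the LLMInferenceService status once per second and diffing the set of condition types whose status is "True" against the expected set. Below is a minimal sketch of that pattern, not the repo's actual helper: the function name and timeout are illustrative, and the serving.kserve.io group and llminferenceservices plural for the custom object are assumptions on top of the 'v1alpha1' version visible in the log.

```python
import time

from kubernetes import client, config


def wait_for_true_conditions(name, namespace, expected, timeout_s=600):
    """Poll an LLMInferenceService until every expected condition is True."""
    config.load_kube_config()  # use load_incluster_config() when run in-cluster
    api = client.CustomObjectsApi()
    missing = set(expected)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        obj = api.get_namespaced_custom_object(
            group="serving.kserve.io",      # assumed API group for LLMInferenceService
            version="v1alpha1",             # matches the 'v1alpha1' passed to get_llmisvc
            namespace=namespace,
            plural="llminferenceservices",  # assumed plural
            name=name,
        )
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = set(expected) - true_types
        if not missing:
            return conditions  # all expected conditions are True
        print(f"Waiting: Missing true conditions: {missing}")
        time.sleep(1)
    raise TimeoutError(f"conditions never became True, still missing: {missing}")


# Usage mirroring the log:
# wait_for_true_conditions("autoscale-cleanup-hpa", "kserve-ci-e2e-test",
#                          {"Ready", "RouterReady", "WorkloadsReady"})
```

Because Ready, WorkloadsReady, and MainWorkloadReady are stuck False with reason ScalingCRDNotFound (and RouterReady stuck Unknown), a loop like this can only spin until the test's timeout expires, which is exactly what the elided iterations show.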
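The condition message points at the root cause: 'no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"' is the API server reporting that the VariantAutoscaling CRD is not registered, so the llmisvc controller cannot reconcile the VA object backing main-workload scaling. A quick check for that CRD, again via the kubernetes Python client, is sketched below; the CRD name variantautoscalings.llmd.ai is derived from the kind and group in the log and is an assumption, not something the log states.

```python
from kubernetes import client, config


def variant_autoscaling_crd_installed() -> bool:
    """Return True if the llmd.ai VariantAutoscaling CRD serves v1alpha1."""
    config.load_kube_config()
    ext = client.ApiextensionsV1Api()
    try:
        # CRD name assumed from the log's kind/group: variantautoscalings.llmd.ai
        crd = ext.read_custom_resource_definition("variantautoscalings.llmd.ai")
    except client.exceptions.ApiException as exc:
        if exc.status == 404:
            # Exactly the state behind 'no matches for kind "VariantAutoscaling"'
            return False
        raise
    served = [v.name for v in crd.spec.versions if v.served]
    return "v1alpha1" in served


if __name__ == "__main__":
    print("VariantAutoscaling CRD usable:", variant_autoscaling_crd_installed())
```

If this returns False on the test cluster, the likely fix is installing the CRD (or disabling the VA-based scaling path for this test) rather than anything in the wait loop itself.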
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:40.866327] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:40.873052] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:41.873343] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:41.880208] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:42.880635] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:42.887552] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:43.887821] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:43.894678] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:44.894970] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:44.902745] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:32:45.903197] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:45.910130] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:46.910466] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:46.918318] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:47.918660] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:47.926448] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:48.926750] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:48.933642] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:49.933903] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:49.940762] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:50.941106] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:50.948269] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:51.948611] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:51.956915] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:52.957182] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:52.964595] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:53.964853] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:53.971771] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:54.972072] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:54.983142] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:55.983469] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:55.993990] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:56.994357] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:57.001549] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:58.001948] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:58.009268] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:59.009618] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:59.019676] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:00.020046] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:00.027346] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:01.027656] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:01.034577] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:02.034870] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:02.042796] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:03.043040] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:03.049991] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:04.050350] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:04.058642] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:05.058985] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:05.065949] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:06.066233] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:06.072776] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:07.073088] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:07.080006] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:08.080312] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:08.087637] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:13.118097] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:13.125685] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:14.126042] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:14.134053] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:15.134322] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:15.141230] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:16.141564] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:16.149945] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:17.150358] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:17.157050] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:18.157393] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:18.164811] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:19.165151] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:19.172891] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:20.173354] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:20.180650] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:21.180931] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:21.188020] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:22.188370] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:22.195782] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:23.196060] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:23.202884] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:24.203141] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:24.209790] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:25.210060] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:25.216889] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:26.217202] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:26.224713] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:27.225101] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:27.232105] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:28.232380] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:28.239038] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:29.239336] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:29.246403] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:30.246662] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:30.253424] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:31.253724] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:31.260693] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:32.261011] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:32.268393] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:33.268729] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:33.276155] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:34.276581] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:34.283382] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:35.283690] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:35.290680] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:33:36.290930] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:36.297707] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:37.297996] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:37.305482] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:38.305850] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:38.317265] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:39.317728] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:39.325455] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:40.325803] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:40.332608] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the same get_llmisvc poll (logging.py:34/43) and the identical "Waiting: Missing true conditions" entry repeat once per second from 20:33:41 through 20:34:17 (37 iterations); the condition list never changes between iterations ...]
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:17.608909] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:17.615992] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:18.616269] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:18.627346] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:19.627627] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:19.634816] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:20.635097] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:20.642410] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:21.642699] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:21.650582] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:22.651014] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:22.658499] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:23.658931] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:23.666281] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:24.666540] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:24.673446] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:25.673883] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:25.680498] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:34:26.680824] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:26.688031] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:27.688332] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:27.695627] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:28.695920] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:28.702882] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:29.703394] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:29.710692] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:30.710985] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:30.718665] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:31.719099] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:31.725578] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:32.725865] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:32.732848] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:33.733148] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:33.739565] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:34.739880] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:34.747215] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:35.747732] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:35.754831] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:36.755072] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:36.762046] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:37.762374] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:37.769798] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:38.770111] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:38.777658] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:39.778239] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:39.785986] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:40.786493] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:40.793625] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:41.793932] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:41.801035] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:42.801350] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:42.807962] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:43.808255] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:43.815144] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [
    {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
    {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
    {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'},
    {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the get_llmisvc start/end/Waiting cycle above repeats once per second with identical conditions, 2026-04-24T20:34:44 through 2026-04-24T20:35:21 ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:22.105537] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:22.113070] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:23.113368] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:23.120921] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:24.121179] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:24.129151] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:25.129479] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:25.136555] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:26.136879] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:26.143911] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:27.144228] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:27.151431] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:28.151706] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:28.159199] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:29.159532] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:29.166837] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:30.167153] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:30.175003] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:31.175344] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:31.182269] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:32.182740] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:32.189843] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:33.190159] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:33.198166] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:34.198644] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:34.206096] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:35.206408] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:35.213676] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:36.214098] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:36.221895] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:37.222233] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:37.229663] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:38.229990] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:38.237553] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:39.237932] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:39.246176] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:40.246559] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:40.254263] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:41.254853] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:41.265273] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:42.265660] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:42.273858] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:43.274157] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:43.281291] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:44.281742] start - args=(, 'autoscale-cleanup-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:44.289491] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:20:49Z', 'message': 
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T20:35:46.305167] end - ❌ 900.712s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [... the same five conditions as in the representative poll above, all unchanged since 2026-04-24T20:20:49Z ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T20:35:46.305703] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service] 'kind': 'LLMInferenceService',
[e2e-llm-inference-service] 'metadata': {'name': 'autoscale-cleanup-hpa',
[e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service] ... (all other metadata fields None) ...},
[e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-cleanu-e5a6b97f'},
[e2e-llm-inference-service] {'name': 'workload-llmd-simulator-no-repl-5d16e76d'},
[e2e-llm-inference-service] {'name': 'scaling-hpa-autoscale-cleanup-h-aa1ae037'}]},
[e2e-llm-inference-service] 'status': None}), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [delete_llmisvc] [2026-04-24T20:35:46.326602] end - ✅ in 0.020s
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_autoscaling_cleanup_hpa] [2026-04-24T20:35:46.326707] end - ❌ 900.799s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [... same condition set: only PresetsCombined is True; MainWorkloadReady, Ready, and WorkloadsReady are False with reason 'ScalingCRDNotFound'; RouterReady is Unknown ...]
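For completeness before the next failure trace: the controller's failing lookup can also be reproduced directly, which separates "CRD not installed" from "custom object not created". This is a hypothetical sketch using CustomObjectsApi from the Kubernetes Python client, reusing the group, version, namespace, and object name from the condition message; the plural variantautoscalings is again an assumption.

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    co = client.CustomObjectsApi()

    # Mirrors the controller's GET that produced ScalingCRDNotFound.
    try:
        va = co.get_namespaced_custom_object(
            group="llmd.ai",
            version="v1alpha1",
            namespace="kserve-ci-e2e-test",
            plural="variantautoscalings",  # assumed plural of kind VariantAutoscaling
            name="autoscale-cleanup-hpa-kserve-va",
        )
        print("VariantAutoscaling exists:", va["metadata"]["name"])
    except ApiException as exc:
        # A 404 here covers both a missing CRD and a missing object;
        # the CRD check earlier disambiguates the two cases.
        print(f"GET failed with HTTP {exc.status}: {exc.reason}")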
[e2e-llm-inference-service] _ test_llm_autoscaling_update_keda[router-managed-workload-llmd-simulator-no-replicas-scaling-keda] _
[e2e-llm-inference-service] [gw1] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python
[e2e-llm-inference-service]
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', se... {'name': 'scaling-keda-autoscale-update-k-a3916813'}]},
[e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')
[e2e-llm-inference-service]
[e2e-llm-inference-service] @pytest.mark.llminferenceservice
[e2e-llm-inference-service] @pytest.mark.autoscaling
[e2e-llm-inference-service] @pytest.mark.autoscaling_keda
[e2e-llm-inference-service] @pytest.mark.parametrize(
[e2e-llm-inference-service]     "test_case",
[e2e-llm-inference-service]     [
[e2e-llm-inference-service]         pytest.param(
[e2e-llm-inference-service]             TestCase(
[e2e-llm-inference-service]                 base_refs=[
[e2e-llm-inference-service]                     "router-managed",
[e2e-llm-inference-service]                     "workload-llmd-simulator-no-replicas",
[e2e-llm-inference-service]                     "scaling-keda",
[e2e-llm-inference-service]                 ],
[e2e-llm-inference-service]                 prompt="KServe is a",
[e2e-llm-inference-service]                 service_name="autoscale-update-keda",
[e2e-llm-inference-service]             ),
[e2e-llm-inference-service]             marks=[
[e2e-llm-inference-service]                 pytest.mark.cluster_cpu,
[e2e-llm-inference-service]                 pytest.mark.cluster_single_node,
[e2e-llm-inference-service]                 pytest.mark.llmd_simulator,
[e2e-llm-inference-service]             ],
[e2e-llm-inference-service]         ),
[e2e-llm-inference-service]     ],
[e2e-llm-inference-service]     indirect=["test_case"],
[e2e-llm-inference-service]     ids=generate_test_id,
[e2e-llm-inference-service] )
[e2e-llm-inference-service] @log_execution
[e2e-llm-inference-service] def test_llm_autoscaling_update_keda(test_case: TestCase):
[e2e-llm-inference-service]     """Patching maxReplicas should update the ScaledObject; VA and ScaledObject still exist."""
[e2e-llm-inference-service]     inject_k8s_proxy()
[e2e-llm-inference-service]     kserve_client = _new_kserve_client()
[e2e-llm-inference-service]     service_name = test_case.llm_service.metadata.name
[e2e-llm-inference-service]
[e2e-llm-inference-service]     try:
[e2e-llm-inference-service] >       _create_and_wait(kserve_client, test_case)
[e2e-llm-inference-service]
[e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:990:
[e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
[e2e-llm-inference-service]
[e2e-llm-inference-service] kserve_client = 
[e2e-llm-inference-service] test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', se...
{'name': 'scaling-keda-autoscale-update-k-a3916813'}]}, [e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m') [e2e-llm-inference-service] [e2e-llm-inference-service] def _create_and_wait(kserve_client, test_case): [e2e-llm-inference-service] """Create LLMISVC and wait for it to be ready.""" [e2e-llm-inference-service] create_llmisvc(kserve_client, test_case.llm_service) [e2e-llm-inference-service] > wait_for_llm_isvc_ready( [e2e-llm-inference-service] kserve_client, test_case.llm_service, test_case.wait_timeout [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/test_llm_autoscaling.py:295: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] args = (, {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kin...-repl-b8014ceb'}, [e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-update-k-a3916813'}]}, [e2e-llm-inference-service] 'status': None}, 900) [e2e-llm-inference-service] kwargs = {}, func_name = 'wait_for_llm_isvc_ready' [e2e-llm-inference-service] timestamp_start = '2026-04-24T20:21:39.764081', start_time = 1777062099.764387 [e2e-llm-inference-service] duration = 900.0711331367493, timestamp_end = '2026-04-24T20:36:39.835531' [e2e-llm-inference-service] [e2e-llm-inference-service] @functools.wraps(func) [e2e-llm-inference-service] def wrapper(*args, **kwargs): [e2e-llm-inference-service] func_name = func.__name__ [e2e-llm-inference-service] [e2e-llm-inference-service] timestamp_start = datetime.now().isoformat() [e2e-llm-inference-service] logger.info( [e2e-llm-inference-service] f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}" [e2e-llm-inference-service] ) [e2e-llm-inference-service] start_time = time.time() [e2e-llm-inference-service] [e2e-llm-inference-service] try: [e2e-llm-inference-service] > result = func(*args, **kwargs) [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/logging.py:40: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] kserve_client = [e2e-llm-inference-service] given = {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] ...tor-no-repl-b8014ceb'}, [e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-update-k-a3916813'}]}, [e2e-llm-inference-service] 'status': None} [e2e-llm-inference-service] timeout_seconds = 900 [e2e-llm-inference-service] [e2e-llm-inference-service] @log_execution [e2e-llm-inference-service] def wait_for_llm_isvc_ready( [e2e-llm-inference-service] kserve_client: KServeClient, [e2e-llm-inference-service] given: V1alpha1LLMInferenceService, [e2e-llm-inference-service] timeout_seconds: int = 900, [e2e-llm-inference-service] ) -> str: [e2e-llm-inference-service] def assert_llm_isvc_ready(): [e2e-llm-inference-service] out = get_llmisvc( [e2e-llm-inference-service] kserve_client, [e2e-llm-inference-service] given.metadata.name, [e2e-llm-inference-service] given.metadata.namespace, [e2e-llm-inference-service] given.api_version.split("/")[1], [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] if "status" not in out: [e2e-llm-inference-service] raise AssertionError("No status found in LLM inference service") 
[e2e-llm-inference-service] [e2e-llm-inference-service] status = out["status"] [e2e-llm-inference-service] if "conditions" not in status: [e2e-llm-inference-service] raise AssertionError("No conditions found in status") [e2e-llm-inference-service] [e2e-llm-inference-service] expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"} [e2e-llm-inference-service] got_true_conditions = set() [e2e-llm-inference-service] [e2e-llm-inference-service] conditions = status["conditions"] [e2e-llm-inference-service] [e2e-llm-inference-service] for condition in conditions: [e2e-llm-inference-service] if condition.get("status") == "True": [e2e-llm-inference-service] got_true_conditions.add(condition.get("type")) [e2e-llm-inference-service] [e2e-llm-inference-service] missing_conditions = expected_true_conditions - got_true_conditions [e2e-llm-inference-service] if missing_conditions: [e2e-llm-inference-service] raise AssertionError( [e2e-llm-inference-service] f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}" [e2e-llm-inference-service] ) [e2e-llm-inference-service] return True [e2e-llm-inference-service] [e2e-llm-inference-service] > return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0) [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:618: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] assertion_fn = .assert_llm_isvc_ready at 0x7f60bdda6200> [e2e-llm-inference-service] timeout = 900, interval = 1.0 [e2e-llm-inference-service] [e2e-llm-inference-service] def wait_for( [e2e-llm-inference-service] assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1 [e2e-llm-inference-service] ) -> Any: [e2e-llm-inference-service] """Wait for the assertion to succeed within timeout.""" [e2e-llm-inference-service] deadline = time.time() + timeout [e2e-llm-inference-service] while True: [e2e-llm-inference-service] try: [e2e-llm-inference-service] > return assertion_fn() [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:628: [e2e-llm-inference-service] _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ [e2e-llm-inference-service] [e2e-llm-inference-service] def assert_llm_isvc_ready(): [e2e-llm-inference-service] out = get_llmisvc( [e2e-llm-inference-service] kserve_client, [e2e-llm-inference-service] given.metadata.name, [e2e-llm-inference-service] given.metadata.namespace, [e2e-llm-inference-service] given.api_version.split("/")[1], [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] if "status" not in out: [e2e-llm-inference-service] raise AssertionError("No status found in LLM inference service") [e2e-llm-inference-service] [e2e-llm-inference-service] status = out["status"] [e2e-llm-inference-service] if "conditions" not in status: [e2e-llm-inference-service] raise AssertionError("No conditions found in status") [e2e-llm-inference-service] [e2e-llm-inference-service] expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"} [e2e-llm-inference-service] got_true_conditions = set() [e2e-llm-inference-service] [e2e-llm-inference-service] conditions = status["conditions"] [e2e-llm-inference-service] [e2e-llm-inference-service] for condition in conditions: [e2e-llm-inference-service] if condition.get("status") == "True": 
[e2e-llm-inference-service] got_true_conditions.add(condition.get("type")) [e2e-llm-inference-service] [e2e-llm-inference-service] missing_conditions = expected_true_conditions - got_true_conditions [e2e-llm-inference-service] if missing_conditions: [e2e-llm-inference-service] > raise AssertionError( [e2e-llm-inference-service] f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}" [e2e-llm-inference-service] ) [e2e-llm-inference-service] E AssertionError: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] [e2e-llm-inference-service] llmisvc/test_llm_inference_service.py:613: AssertionError [e2e-llm-inference-service] ------------------------------ Captured log setup ------------------------------ [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-autoscale-update-16605242 in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-autoscale-update-16605242 [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-autoscale-update-16605242 [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-llmd-simulator-no-repl-b8014ceb in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-llmd-simulator-no-repl-b8014ceb [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-llmd-simulator-no-repl-b8014ceb [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scaling-keda-autoscale-update-k-a3916813 in namespace kserve-ci-e2e-test [e2e-llm-inference-service] INFO 
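Read against the assert_llm_isvc_ready scan in the traceback above, the dump explains the stall: of the five reported conditions only PresetsCombined is 'True', so the three expected types can never be collected while the CRD is missing. Re-running the same set arithmetic on the logged statuses (values copied from the dump, messages omitted) makes that concrete:

# Conditions exactly as reported at 2026-04-24T20:21:44Z (messages elided).
conditions = [
    {"type": "MainWorkloadReady", "status": "False"},  # reason: ScalingCRDNotFound
    {"type": "PresetsCombined", "status": "True"},
    {"type": "Ready", "status": "False"},              # reason: ScalingCRDNotFound
    {"type": "RouterReady", "status": "Unknown"},      # not 'True', so counted as missing
    {"type": "WorkloadsReady", "status": "False"},     # reason: ScalingCRDNotFound
]

expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
got_true_conditions = {c["type"] for c in conditions if c.get("status") == "True"}

print(got_true_conditions)                             # {'PresetsCombined'}
print(expected_true_conditions - got_true_conditions)  # all three expected types,
# which is exactly the AssertionError message raised above

Note that RouterReady sits at 'Unknown' rather than 'False'; the scan only collects types whose status is the string 'True', so it is missing all the same.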
------------------------------ Captured log setup ------------------------------
INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-autoscale-update-16605242 in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-autoscale-update-16605242
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-autoscale-update-16605242
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-llmd-simulator-no-repl-b8014ceb in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-llmd-simulator-no-repl-b8014ceb
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-llmd-simulator-no-repl-b8014ceb
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scaling-keda-autoscale-update-k-a3916813 in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scaling-keda-autoscale-update-k-a3916813
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scaling-keda-autoscale-update-k-a3916813
------------------------------ Captured log call -------------------------------
INFO e2e.llmisvc.logging:logging.py:34 [test_llm_autoscaling_update_keda] [2026-04-24T20:21:39.696415] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', service_name='autoscale-update-keda', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1',
    'kind': 'LLMInferenceService',
    'metadata': {'annotations': None,
                 'creation_timestamp': None,
                 'deletion_grace_period_seconds': None,
                 'deletion_timestamp': None,
                 'finalizers': None,
                 'generate_name': None,
                 'generation': None,
                 'labels': None,
                 'managed_fields': None,
                 'name': 'autoscale-update-keda',
                 'namespace': 'kserve-ci-e2e-test',
                 'owner_references': None,
                 'resource_version': None,
                 'self_link': None,
                 'uid': None},
    'spec': {'baseRefs': [{'name': 'router-managed-autoscale-update-16605242'},
                          {'name': 'workload-llmd-simulator-no-repl-b8014ceb'},
                          {'name': 'scaling-keda-autoscale-update-k-a3916813'}]},
    'status': None}, model_name='facebook/opt-125m')}
INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T20:21:39.709375] start - args=(, {... same LLMInferenceService dict as above ...}), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T20:21:39.763995] end - ✅ in 0.054s
INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T20:21:39.764081] start - args=(, {... same LLMInferenceService dict as above ...}, 900), kwargs={}
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:39.764405] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:39.769962] end - ✅ in 0.005s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[... the same get_llmisvc poll repeats at 20:21:40, 20:21:41, 20:21:42 and 20:21:43, each ending "Waiting: No conditions found in status" ...]
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:21:44.946975] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:21:44.959121] end - ✅ in 0.012s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
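From this first dump onward the status never changes: the CRD never appears, so the poll below re-reads the object every second for the full 900 s wait_timeout and re-prints identical conditions. One way runs like this could fail fast is a guard on the CRD before entering the wait, sketched here as a hypothetical pytest marker (the helper name and the CRD plural are assumptions, not part of the suite):

import pytest
from kubernetes import client, config


def _variant_autoscaling_crd_present() -> bool:
    """Best-effort probe for the llmd.ai VariantAutoscaling CRD (hypothetical helper)."""
    config.load_kube_config()
    try:
        client.ApiextensionsV1Api().read_custom_resource_definition(
            "variantautoscalings.llmd.ai"  # assumed plural.group name
        )
        return True
    except client.exceptions.ApiException as exc:
        if exc.status == 404:
            return False
        raise


# Applied to the autoscaling tests, this would turn a 900 s poll-to-timeout
# into an immediate, clearly labelled skip when the CRD is not installed.
requires_variant_autoscaling_crd = pytest.mark.skipif(
    not _variant_autoscaling_crd_present(),
    reason='no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"',
)

The same probe could equally run in the suite's setup fixtures so every autoscaling test reports a single crisp skip reason instead of the repeated dumps below.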
[... the get_llmisvc poll and the identical "Waiting: Missing true conditions" dump then repeat at ~1 s intervals from 2026-04-24T20:21:45.959524 through 2026-04-24T20:22:07.273761, the condition set unchanged since 20:21:44Z ...]
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:08.274056] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:08.281274] end - ✅ in 0.007s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:09.281802] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:09.289751] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:10.290079] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:10.297433] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:11.297845] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:11.305514] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:12.305973] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:12.313607] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:13.314048] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:13.323034] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:14.323490] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:14.331247] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:15.331568] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:15.340057] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:16.340531] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:16.348015] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:17.348355] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:17.360214] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:18.360562] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:18.368541] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:19.368829] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:19.376749] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:20.377035] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:20.384325] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:21.384627] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:21.391655] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:22.391976] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:22.399818] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:23.400135] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:23.407717] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:24.408035] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:24.415533] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:25.415870] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:25.424062] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:26.424361] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:26.431970] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:27.432341] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:27.443272] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:28.443653] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:28.454269] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:29.454570] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:29.468394] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:30.468859] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:30.476392] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:31.476676] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:31.484155] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:32.484605] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:32.492124] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:33.492634] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:33.501261] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:34.501795] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:34.509474] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:35.509918] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:35.517731] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:36.518008] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:36.525271] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:37.525599] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:37.533167] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:38.533532] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:38.541713] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
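The wait loop cannot converge because the failure it is polling is static: the llmisvc controller reports ScalingCRDNotFound, meaning no API resource on this cluster serves kind VariantAutoscaling in group/version llmd.ai/v1alpha1, so the main-workload VA for the autoscale-update-keda case can never be reconciled, Ready and WorkloadsReady stay False, and RouterReady stays Unknown. A minimal way to confirm the missing CRD from a shell against the test cluster, sketched under the assumption that KUBECONFIG already points at the cluster this run provisioned (the check relies only on the API group, so nothing about the resource's plural name needs to be guessed):

# Assumes KUBECONFIG already points at the ephemeral test cluster for this run.
# List every resource served under the llmd.ai API group; empty output is
# exactly the 'no matches for kind ... in version "llmd.ai/v1alpha1"' condition.
kubectl api-resources --api-group=llmd.ai

# Same check against installed CRDs.
kubectl get crds -o name | grep 'llmd.ai' \
  || echo "no llmd.ai CRDs installed: VariantAutoscaling cannot be reconciled"

If the group is indeed absent, retrying or lengthening the wait at test_llm_inference_service.py:632 cannot help; the likely fix is to install the llm-d VariantAutoscaling CRD (or skip/disable the KEDA scaling path) before this case runs. The unchanging lastTransitionTime of 2026-04-24T20:21:44Z across every poll is consistent with that reading: the controller keeps re-evaluating, but the condition never transitions.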
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:39.542191] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:39.549775] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:40.550132] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:40.557346] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:41.557752] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:41.564906] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:42.565207] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:42.572603] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:43.572877] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:43.580162] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:22:44.580516] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:44.589213] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:45.589587] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:45.597012] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:46.597478] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:46.605023] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:47.605471] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:47.612633] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:48.612955] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:48.620692] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:49.620963] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:49.628162] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:50.628465] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:50.635754] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:51.636059] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:51.643956] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:52.644238] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:52.651741] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:53.652097] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:53.660459] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:54.660908] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:54.671181] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:55.671470] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:55.678756] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:56.679234] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:56.688940] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:57.689341] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:57.696841] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:58.697150] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:58.704746] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:22:59.705006] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:22:59.712524] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:00.712827] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:00.720047] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:01.720560] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:01.728764] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:02.729161] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:02.737070] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:03.737561] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:03.746360] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:04.746840] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:04.754615] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:05.754985] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:05.762915] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:06.763198] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:06.770922] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:07.771416] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:07.780203] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:08.780720] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:08.787858] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:09.788225] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:09.795054] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
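Diagnosis: every iteration fails with reason ScalingCRDNotFound — the controller cannot find kind "VariantAutoscaling" in API group llmd.ai/v1alpha1, i.e. the VariantAutoscaling CRD was never installed on the test cluster, so the autoscale-update-keda-kserve-va object can never be reconciled and the Ready/WorkloadsReady conditions stay False. On a live cluster this can be confirmed with kubectl api-resources --api-group=llmd.ai (the VariantAutoscaling kind should appear) or kubectl get crd variantautoscalings.llmd.ai (the plural CRD name here is inferred from the kind, not taken from the log).

For reference, the wait loop at test_llm_inference_service.py:632 appears to poll once per second and compare the condition types reporting status 'True' against the expected set. A minimal sketch of that behaviour, assuming a hypothetical get_llmisvc(name, namespace) helper that returns the custom resource as a dict (not the project's actual implementation):

import time

EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

def wait_for_llmisvc_ready(get_llmisvc, name, namespace, timeout_s=600):
    # Poll the LLMInferenceService until every expected condition reports
    # status "True", mirroring the one-second cadence visible in the log.
    missing = set(EXPECTED)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        conditions = get_llmisvc(name, namespace).get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return conditions  # all expected conditions are True
        print(f"Waiting: Missing true conditions: {missing}, expected {EXPECTED}, got {conditions}")
        time.sleep(1)
    raise TimeoutError(f"conditions still not True after {timeout_s}s: {missing}")

Because the missing CRD is a cluster-setup problem rather than a transient one, a loop of this shape can never succeed here; it simply burns the timeout until the test fails, as the final iteration below shows.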
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:10.795319] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:10.802424] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:11.802698] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:11.809750] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:12.810054] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:12.817219] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:13.817535] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:13.824717] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:14.825050] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:14.832569] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:15.832862] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:15.840758] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:16.841164] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:16.848273] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:17.848707] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:17.855880] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:18.856238] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:18.864145] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:19.864495] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:19.872743] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:20.873033] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:20.879755] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:21.880040] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:21.887861] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:22.888269] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:22.896031] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:23.896332] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:23.904379] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:24.904716] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:24.911733] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:25.912074] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:25.919051] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:26.919401] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:26.926840] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:27.927215] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:27.934731] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:28.935043] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:28.945023] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:29.945347] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:29.952828] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:30.953218] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:30.961578] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:31.961884] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:31.968636] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:32.969093] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:32.977162] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:33.977553] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:33.985194] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:23:34.985544] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:34.993292] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:35.993811] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:36.001633] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:37.002084] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:37.009711] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:38.010076] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:38.016667] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:23:39.016983] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:23:39.025071] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... identical get_llmisvc poll iterations from 2026-04-24T20:23:40 through 2026-04-24T20:24:16 omitted (37 iterations): once per second the test re-fetched the LLMInferenceService (each get completing in 0.006-0.013s) and logged the same "Waiting: Missing true conditions" record; the condition set never changed (MainWorkloadReady=False, Ready=False, WorkloadsReady=False with reason ScalingCRDNotFound; RouterReady=Unknown; PresetsCombined=True) ...]
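Root cause, as the repeated status message states: the llmisvc controller cannot reconcile the main workload's VariantAutoscaling resource because no kind "VariantAutoscaling" is registered for API group/version "llmd.ai/v1alpha1". In other words, the CRD backing llm-d variant autoscaling was never installed on this test cluster, so the autoscale-update-keda test can never reach Ready. Below is a minimal diagnostic sketch against the cluster, assuming the kubeconfig written by the get-kubeconfig step; the CRD plural name variantautoscalings.llmd.ai and the llminferenceservice resource name are inferred from the kind and group shown in the error and are assumptions, not confirmed by this log.

# Point kubectl at the ephemeral test cluster (path from the get-kubeconfig step above).
export KUBECONFIG=/credentials/cluster-886sz-kubeconfig

# List what, if anything, is registered under the llmd.ai API group.
# Empty output would confirm the "no matches for kind" error.
kubectl api-resources --api-group=llmd.ai

# Probe the CRD directly (plural name is an assumption inferred from kind VariantAutoscaling).
kubectl get crd variantautoscalings.llmd.ai

# Dump the conditions the test is polling on (resource name inferred from the error message).
kubectl get llminferenceservice autoscale-update-keda -n kserve-ci-e2e-test \
  -o jsonpath='{.status.conditions}'

If the CRD is indeed absent, the likely fix is in the deploy step (install the llm-d VariantAutoscaling CRDs before running the tests selected by 'llminferenceservice and cluster_cpu' in llm-d mode) rather than in the test itself, which would otherwise keep polling until its timeout regardless.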
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:05.229083] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:05.237105] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:06.237401] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:06.245102] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:07.245378] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:07.252694] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:08.253085] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:08.260847] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:09.261292] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:09.268567] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:10.268846] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:10.276037] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:11.276468] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:11.283820] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:12.284207] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:12.290988] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:13.291286] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:13.298865] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:14.299168] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:14.307319] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:15.307580] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:15.315151] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:16.315516] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:16.322951] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:17.323375] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:17.330484] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:18.330767] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:18.337873] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:19.338192] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:19.345805] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:20.346140] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:20.354144] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:21.354470] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:21.361765] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:22.362206] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:22.370146] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:23.370425] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:23.377537] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:24.377952] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:24.385613] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:24:25.386012] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:25.393887] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:26.394206] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:26.401545] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:27.401830] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:27.409473] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:28.409868] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:28.417111] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:29.417440] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:29.424739] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:30.425120] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:30.434500] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:31.434865] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:31.442476] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:32.442898] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:32.451007] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:33.451377] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:33.459170] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:34.459491] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:34.466825] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:35.467174] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:35.476266] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:36.476679] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:36.483560] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:37.483982] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:37.491393] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:38.491705] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:38.498635] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:39.498955] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:39.506600] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:40.506957] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:40.514734] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:41.515181] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:41.523179] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:24:42.523621] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:24:42.531183] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [
    {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
    {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
    {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'},
    {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the identical get_llmisvc poll and "Waiting: Missing true conditions" message repeat roughly once per second from 20:24:43 through 20:25:03; the condition set never changes ...]
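The repetition above is a readiness wait: the test re-fetches the LLMInferenceService roughly once per second and checks whether the Ready, WorkloadsReady, and RouterReady conditions have all turned True, giving up only at its timeout. A minimal sketch of that polling pattern, assuming a get_status() callable that returns the status.conditions list shown in the log (a stand-in for the repo's get_llmisvc helper, whose real signature is not reproduced here):

import time

REQUIRED = {"Ready", "WorkloadsReady", "RouterReady"}

def wait_for_conditions(get_status, timeout_s=600, interval_s=1.0):
    """Poll until every condition type in REQUIRED reports status 'True'."""
    missing = set(REQUIRED)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        conditions = get_status()  # list of dicts, like the log's "got [...]"
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = REQUIRED - true_types
        if not missing:
            return conditions  # success: all required conditions are True
        time.sleep(interval_s)
    raise TimeoutError(f"Missing true conditions: {missing}")

Because the cluster is stuck on ScalingCRDNotFound, the missing set never shrinks, which is why the same message recurs until the timeout fires.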
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:04.692199] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:04.698830] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:05.699150] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:05.706825] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:06.707294] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:06.715187] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:07.715533] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:07.722701] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:08.723124] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:08.731998] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:09.732367] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:09.739977] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:10.740428] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:10.747858] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:11.748189] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:11.755376] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
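The root cause the conditions point at is environmental rather than a flaw in the waiting logic: the controller cannot find the VariantAutoscaling kind in API group llmd.ai/v1alpha1 (reason ScalingCRDNotFound), so the KEDA autoscale-update scenario can never reconcile. A quick way to confirm the CRD is absent, sketched with the official kubernetes Python client against a reachable kubeconfig; the plural name variantautoscalings is inferred from the kind and should be verified against the llm-d install manifests:

from kubernetes import client, config
from kubernetes.client.rest import ApiException

# <plural>.<group>; the plural form is an inference from kind VariantAutoscaling
CRD_NAME = "variantautoscalings.llmd.ai"

def crd_installed(name: str) -> bool:
    """Return True if the named CustomResourceDefinition exists on the cluster."""
    config.load_kube_config()  # assumes a kubeconfig is available locally
    api = client.ApiextensionsV1Api()
    try:
        api.read_custom_resource_definition(name)
        return True
    except ApiException as exc:
        if exc.status == 404:  # CRD not registered -> ScalingCRDNotFound
            return False
        raise

if __name__ == "__main__":
    print(CRD_NAME, "installed:", crd_installed(CRD_NAME))

If this returns False, the likely fix is installing the missing CRD on the test cluster (or gating the KEDA scenario on its presence) rather than changing the test itself.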
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:12.755767] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:12.763538] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:13.763885] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:13.771333] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:14.771786] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:14.779575] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:25:15.779940] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:15.787149] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:16.787470] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:16.794626] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:17.795065] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:17.802920] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:18.803241] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:18.810560] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:19.810847] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:19.817822] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:20.818139] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:20.825192] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:21.825532] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:21.833448] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:22.833802] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:22.841232] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:23.841642] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:23.848858] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:24.849154] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:24.856665] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:25.856927] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:25.864075] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:26.864346] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:26.871892] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:27.872183] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:27.878959] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:28.879361] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:28.888694] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:29.889044] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:29.896469] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:30.896917] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:30.904219] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:31.904546] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:31.911828] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:32.912350] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:32.919770] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:33.920290] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:33.926656] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:34.927056] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:34.933891] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:35.934275] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:35.942535] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:36.942841] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:36.951556] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:37.951870] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:37.958842] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:38.959125] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:38.966096] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:39.966430] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:39.973798] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:40.974132] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:40.983471] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:41.983858] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:41.990735] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:42.991029] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:42.998152] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:43.998533] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:44.005528] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:45.005847] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:45.015010] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:46.015394] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:46.022635] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:47.023010] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:47.030147] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:48.030444] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:48.038235] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:49.038612] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:49.050150] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:50.050414] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:50.057872] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:51.058151] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:51.065622] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:52.065923] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:52.072992] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:53.073483] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:53.081983] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:54.082268] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:54.089702] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:55.089999] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:55.096790] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:56.097059] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:56.103653] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:57.103971] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:57.110697] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:58.110969] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:58.117523] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:25:59.117896] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:25:59.124666] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:00.124931] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:00.132273] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:01.132649] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:01.139499] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:02.139828] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:02.147679] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:03.148129] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:03.155148] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:04.155452] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:04.162402] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:05.162707] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:05.169558] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:26:06.169832] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:06.176747] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:07.177066] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:07.184579] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:08.184832] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:08.192378] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:09.192784] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:09.199821] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:10.200132] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:10.207794] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:11.208222] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:11.215739] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:12.216019] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:12.222935] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:13.223241] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:13.230618] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:14.230945] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:14.239593] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:15.240049] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:15.248082] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:16.248425] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:16.255521] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:17.256062] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:17.263202] end - ✅ in 
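[analysis] Every non-True condition in the loop above fails for the same reason, ScalingCRDNotFound: the llmisvc controller cannot map kind "VariantAutoscaling" in group/version "llmd.ai/v1alpha1" to any CRD on the test cluster, so the KEDA autoscale-update reconcile can never progress and the poll runs until its timeout. The quickest check is `kubectl get crd variantautoscalings.llmd.ai` against the test cluster's kubeconfig. Below is a minimal diagnostic sketch using the Kubernetes Python client; it is illustrative, not part of the test suite, and the CRD name variantautoscalings.llmd.ai is an assumption inferred from the kind and group in the error message:

    import sys
    from kubernetes import client, config

    # Assumed CRD name, inferred from kind=VariantAutoscaling and group=llmd.ai
    # in the ScalingCRDNotFound message above; verify against the operator's manifests.
    CRD_NAME = "variantautoscalings.llmd.ai"

    def main() -> int:
        config.load_kube_config()  # uses the active kubeconfig for the test cluster
        ext = client.ApiextensionsV1Api()
        try:
            crd = ext.read_custom_resource_definition(CRD_NAME)
        except client.ApiException as exc:
            if exc.status == 404:
                # No such CRD on the cluster -> matches reason=ScalingCRDNotFound
                print(f"{CRD_NAME} not found; explains ScalingCRDNotFound")
                return 1
            raise
        served = [v.name for v in crd.spec.versions if v.served]
        print(f"{CRD_NAME} present; served versions: {served}")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

If this reports the CRD as missing, the fix belongs in the cluster/deploy step (installing whichever operator serves the llmd.ai/v1alpha1 API before the llminferenceservice KEDA tests run), not in the test itself: the controller can only resolve kinds whose CRDs exist.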
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:24.310829] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:24.318017] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime':
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:25.318365] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:25.325251] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:26.325790] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:26.332718] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:27.332990] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:27.340254] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:28.340655] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:28.348012] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:29.348271] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:29.355326] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:30.355620] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:30.362478] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:31.362769] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:31.369878] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:32.370188] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:32.377385] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:33.377708] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:33.384838] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:34.385144] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:34.392429] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:35.392757] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:35.402167] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:36.402480] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:36.409926] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:37.410353] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:37.424047] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:38.424554] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:38.432173] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:39.432670] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:39.439527] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:40.439819] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:40.447143] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:41.447502] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:41.454232] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:42.454548] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:42.461712] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:43.461971] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:43.468416] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:44.468719] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:44.475837] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:45.476166] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:45.483163] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:46.483623] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:46.490464] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:47.490748] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:47.497881] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:48.498349] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:48.505668] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:49.505940] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:49.512789] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:50.513073] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:50.521794] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:51.522229] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:51.528899] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:52.529211] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:52.536687] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:53.536931] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:53.543987] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:54.544334] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:54.552536] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:55.552828] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:55.559641] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:26:56.559897] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:56.567291] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:57.567575] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:57.574646] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:58.575101] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:58.581603] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:26:59.581872] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:26:59.588756] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:00.589042] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:00.596678] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:01.597212] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:01.604090] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:02.604407] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:02.612221] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:03.612560] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:03.619825] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:04.620207] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:04.627945] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:05.628350] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:05.635077] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:06.635342] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:06.642228] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:07.642557] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:07.649589] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:08.649876] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:08.656795] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:09.657080] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:09.664351] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:10.664777] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:10.673821] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:11.674364] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:11.688495] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:12.688921] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:12.699068] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:13.699372] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:13.706885] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:14.707229] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:14.714812] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:15.715230] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:15.722483] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:16.722762] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:16.729968] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:17.730345] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:17.737179] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:18.737484] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:18.744371] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:19.744616] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:19.751480] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:20.751746] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:20.758281] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:21.758601] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:21.765382] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:22.765727] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:22.772952] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:23.773344] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:23.780915] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
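The repeating triplet above is a one-second condition poll: the test fetches the LLMInferenceService, collects the condition types whose status is 'True', and keeps waiting while the expected set {'Ready', 'WorkloadsReady', 'RouterReady'} is not fully present. A minimal sketch of that loop, assuming a hypothetical get_llmisvc(name, namespace, version) helper that returns the resource as a dict (the real helper's signature is only partially visible in the log):

    import time

    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

    def wait_for_conditions(get_llmisvc, name, namespace, timeout_s=600, poll_s=1.0):
        # Poll once per second, mirroring the ~1 s cadence of the log timestamps.
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            llmisvc = get_llmisvc(name, namespace, "v1alpha1")
            conditions = llmisvc.get("status", {}).get("conditions", [])
            # Condition types currently reporting status "True".
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return llmisvc  # all expected conditions are True
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {EXPECTED}, got {conditions}")
            time.sleep(poll_s)
        raise TimeoutError(f"conditions never became True: {EXPECTED}")

Since the controller re-stamps the same ScalingCRDNotFound conditions on every reconcile (note the lastTransitionTime never moves past 20:21:44Z), no amount of polling can succeed here; the loop can only run out its timeout.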
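The poll is stuck because every failing condition carries reason ScalingCRDNotFound: the controller asks the API server for kind "VariantAutoscaling" in group/version llmd.ai/v1alpha1 and gets "no matches", meaning the VariantAutoscaling CRD is not installed on the test cluster, so the autoscale-update-keda reconcile can never complete. A minimal sketch of how one might confirm that, assuming the standard kubernetes Python client and a reachable kubeconfig; the plural CRD name "variantautoscalings.llmd.ai" is inferred from the kind and group in the error message and is an assumption, since the log never prints it:

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    def variant_autoscaling_crd_served() -> bool:
        config.load_kube_config()  # or load_incluster_config() inside a pod
        api = client.ApiextensionsV1Api()
        try:
            # Assumed plural name, derived from kind "VariantAutoscaling" + group "llmd.ai".
            crd = api.read_custom_resource_definition("variantautoscalings.llmd.ai")
        except ApiException as exc:
            if exc.status == 404:
                return False  # CRD absent: exactly the ScalingCRDNotFound case above
            raise
        # The controller specifically needs the v1alpha1 version to be served.
        return any(v.name == "v1alpha1" and v.served for v in crd.spec.versions)

    if __name__ == "__main__":
        print("VariantAutoscaling v1alpha1 served:", variant_autoscaling_crd_served())

If this returns False, the likely fix is on the cluster-setup side (installing the llm-d VariantAutoscaling CRDs before the KEDA autoscale tests run) rather than in the test itself.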
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:24.781349] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:24.788502] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:25.788815] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:25.798360] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:26.798791] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:26.806116] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:27.806383] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:27.813468] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:28.813781] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:28.821354] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:29.821735] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:29.828489] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:30.828742] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:30.835752] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:31.836100] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:31.842474] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:32.842783] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:32.849592] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:33.849835] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:33.856845] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:34.857087] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:34.863834] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:35.864118] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:35.873784] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:36.874048] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:36.881004] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:37.881359] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:37.888715] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:38.889114] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:38.895976] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:39.896328] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:39.903399] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:40.903678] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:40.910684] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:41.911125] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:41.918155] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:42.918447] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:42.925720] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:43.925995] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:43.932889] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:44.933337] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:44.940614] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:45.940971] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:45.947714] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:27:46.948030] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:46.954923] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:47.955346] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:47.962418] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:48.962693] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:48.969807] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:49.970045] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:49.977711] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:50.977947] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:50.984641] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:51.984899] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:51.992729] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:52.993198] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:53.000539] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:54.000946] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:54.009089] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:55.009608] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:55.017064] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:56.017379] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:56.024207] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': 
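The root cause is the same in every iteration: the controller reports reason ScalingCRDNotFound because no kind "VariantAutoscaling" is registered under llmd.ai/v1alpha1 on the cluster, so MainWorkloadReady, Ready, and WorkloadsReady can never leave False, RouterReady stays Unknown, and the wait loop at test_llm_inference_service.py:632 spins until its timeout. A minimal triage sketch against the test cluster follows; the CRD/resource plurals (variantautoscalings.llmd.ai, llminferenceservices) are inferred from the usual Kubernetes naming convention and are assumptions, not confirmed names.

#!/usr/bin/env bash
# Hypothetical triage, assuming $KUBECONFIG already points at the e2e cluster.

# 1. Is any llmd.ai API group registered at all?
kubectl api-resources --api-group=llmd.ai

# 2. Look for the VariantAutoscaling CRD by name (plural form assumed).
kubectl get crd | grep -i variantautoscaling \
  || echo "VariantAutoscaling CRD not installed - the KEDA autoscale test cannot pass"

# 3. Inspect the stuck resource's conditions (name and namespace taken from the log;
#    the 'llminferenceservices' plural is assumed).
kubectl -n kserve-ci-e2e-test get llminferenceservices autoscale-update-keda -o yaml \
  | grep -A6 'conditions:'

If step 2 confirms the CRD is absent, the fix lies in the test environment setup (installing the llm-d scaling CRDs before the KEDA autoscale test runs) rather than in the test itself, which is behaving correctly by waiting.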
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:57.024596] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:57.032172] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:58.032462] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:58.044289] end - ✅ in 
0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:27:59.044743] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:27:59.051268] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:00.051543] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:00.058448] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:01.059252] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:01.065989] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:02.066345] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:02.073637] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:03.074077] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:03.081543] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:04.082044] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:04.090094] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:05.090380] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:05.097877] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:06.098345] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:06.106984] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:07.107241] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:07.114540] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:08.114835] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:08.121940] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:09.122365] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:09.129780] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:10.130096] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:10.137067] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:11.137384] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:11.144934] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:12.145284] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:12.152125] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:13.152466] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:13.160409] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:14.160803] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:14.168556] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:15.168965] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:15.176064] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:16.176497] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:16.183806] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:17.184162] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:17.191171] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:18.191506] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:18.198836] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:19.199221] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:19.206381] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:20.206760] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:20.213846] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:21.214084] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:21.220772] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:22.221129] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:22.230476] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:23.230735] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:23.237706] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:28:24.237989] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:28:24.246409] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [
  {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
  {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
  {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
  {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'},
  {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... 37 further poll iterations omitted: get_llmisvc start/end pairs at one-second intervals from 20:28:25 through 20:29:01, each followed by an identical "Waiting: Missing true conditions" status block; only the poll timestamp and the millisecond-scale call duration vary, and every condition keeps lastTransitionTime 2026-04-24T20:21:44Z ...]
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:01.537232] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:02.537597] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:02.544916] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:03.545271] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:03.552517] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:04.552827] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:04.561276] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:05.561641] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:05.570416] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:06.570725] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:06.577623] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:07.577900] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:07.584612] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:08.584912] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:08.592044] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:09.592362] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:09.599321] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:10.599602] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:10.606447] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:11.606767] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:11.614242] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:12.614551] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:12.621231] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:13.621571] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:13.628653] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:14.628936] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
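The wait above can never succeed: of the five conditions reported on every poll, only PresetsCombined is True, while MainWorkloadReady, Ready, and WorkloadsReady are False with reason ScalingCRDNotFound, and RouterReady is Unknown. A minimal sketch of the set comparison that test_llm_inference_service.py:632 appears to be logging, assuming the status is a plain list of condition dicts as dumped above (the helper name and EXPECTED constant are illustrative, not the actual kserve e2e code):

```python
# Illustrative sketch only -- not the actual kserve e2e helper.
EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

def missing_true_conditions(conditions: list[dict]) -> set[str]:
    """Return the expected condition types whose status is not 'True'."""
    true_types = {c["type"] for c in conditions if c.get("status") == "True"}
    return EXPECTED - true_types

# With the status dump above, only PresetsCombined is True, so every poll
# yields {'Ready', 'WorkloadsReady', 'RouterReady'} and the loop spins until
# its timeout: the conditions cannot change while the VariantAutoscaling CRD
# is absent from the cluster.
```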
[e2e-llm-inference-service] [... the same get_llmisvc poll and an identical "Waiting: Missing true conditions" status repeat once per second from 2026-04-24T20:28:56 through 2026-04-24T20:29:33, each lookup completing in 0.006-0.014s; all five conditions are unchanged since their lastTransitionTime of 2026-04-24T20:21:44Z ...]
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:33.771337] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:33.778724] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:34.779091] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:34.786291] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:35.786668] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:35.793702] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:36.794017] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:36.801564] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:37.801890] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:37.809193] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:38.809562] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:38.816658] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:39.816977] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:39.827355] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:40.827697] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:40.835134] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:41.835542] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:41.842457] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:42.842763] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:42.849821] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:43.850204] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:43.857289] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:44.857722] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:44.864906] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:45.865257] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:45.874866] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:46.875322] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:46.882486] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:47.882838] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:47.889969] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:48.890354] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:48.900424] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:49.900791] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:49.908735] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:50.909166] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:50.916127] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:51.916382] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:51.923347] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:52.923662] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:52.930438] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:53.930724] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:53.938815] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:54.939092] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:54.945760] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:55.946066] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:55.954788] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:56.955052] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:56.962387] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:57.962798] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:57.969222] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:58.969587] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:58.976340] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:29:59.976597] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:29:59.983195] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:00.983489] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:00.990279] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
[... the identical get_llmisvc poll and "Waiting: Missing true conditions" output repeats once per second, unchanged, from 20:30:00.983489 through 20:30:37.273827 ...]
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:37.266090] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:37.273827] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:38.274329] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:38.281816] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:39.282098] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:39.288732] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:40.289092] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:40.296908] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:41.297180] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:41.304079] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:42.304343] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:42.310962] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:43.311242] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:43.318208] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:44.318491] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:44.325688] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:45.325930] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:45.332794] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:46.333111] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:46.339770] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:47.340060] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:47.346871] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:48.347166] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:48.353864] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:49.354150] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:49.361045] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:50.361364] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:50.368338] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:51.368622] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:51.378759] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:52.379211] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:52.386552] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:53.386888] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:53.393829] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:54.394185] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:54.401203] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:55.401527] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:55.409661] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:56.409924] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:56.416894] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:57.417176] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:57.424339] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:58.424797] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:58.432111] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:30:59.432481] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:30:59.439061] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:00.439324] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:00.445925] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:01.446213] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:01.454046] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:02.454527] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:02.461877] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:03.462168] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:03.468826] end - ✅ in 0.006s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
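The record above is one complete iteration of the wait loop at test_llm_inference_service.py:632: poll get_llmisvc, compare the reported conditions against the expected set {'Ready', 'WorkloadsReady', 'RouterReady'}, log the miss, and retry roughly once per second. For triage, the same condition set can be read straight off the cluster; a minimal sketch, assuming a kubeconfig pointed at the test cluster and that the LLMInferenceService kind is served under the lowercase resource name llminferenceservice:

  # Print type=status (reason) for each status condition of the stuck resource:
  kubectl get llminferenceservice autoscale-update-keda -n kserve-ci-e2e-test \
    -o jsonpath='{range .status.conditions[*]}{.type}{"="}{.status}{" ("}{.reason}{")"}{"\n"}{end}'

Per the log, this should show MainWorkloadReady=False, Ready=False, and WorkloadsReady=False with reason ScalingCRDNotFound, RouterReady=Unknown, and PresetsCombined=True.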
[... 37 near-identical iterations elided: the same get_llmisvc start/end pair (each call returning in 0.006-0.010s) and the same "Waiting: Missing true conditions" record repeat once per second from 20:31:04 through 20:31:40. Every iteration reports the identical five conditions (MainWorkloadReady=False, Ready=False, and WorkloadsReady=False with reason ScalingCRDNotFound; RouterReady=Unknown; PresetsCombined=True), with lastTransitionTime unchanged at 2026-04-24T20:21:44Z ...]
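Every elided iteration fails for the same root cause: the API server serves no "VariantAutoscaling" kind in group llmd.ai/v1alpha1, so the controller cannot fetch the VariantAutoscaling object autoscale-update-keda-kserve-va it needs for the KEDA autoscaling path and keeps stamping ScalingCRDNotFound on MainWorkloadReady, Ready, and WorkloadsReady. A quick confirmation against the same cluster; note that variantautoscalings.llmd.ai is an assumed plural.group CRD name inferred from the kind and group in the error, not something this log states:

  # Does the cluster serve anything in the llmd.ai API group?
  kubectl api-resources --api-group=llmd.ai
  # Is the CRD itself registered? (name assumed from kind "VariantAutoscaling" + group "llmd.ai")
  kubectl get crd variantautoscalings.llmd.ai

If both come back empty or NotFound, the fix belongs in the deploy step rather than the test: the VariantAutoscaling CRD (presumably shipped with llm-d's workload-variant-autoscaler component) must be applied before this suite runs. Until it is, Ready and WorkloadsReady can never flip to True, so the wait loop here is guaranteed to run out its timeout.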
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:41.752025] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:41.758818] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:42.759099] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:42.766148] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:43.766519] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:43.772868] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:44.773166] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:44.780728] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:45.781067] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:45.788105] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:46.788353] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:46.794893] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:47.795183] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:47.802755] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:31:48.803029] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:31:48.809479] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
[... identical get_llmisvc polls and condition dumps continue once per second through 2026-04-24T20:32:11; the conditions never change ...]
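For reference, the wait itself (test_llm_inference_service.py:632) is a once-per-second status poll that blocks until Ready, WorkloadsReady and RouterReady are all True; with the CRD missing, the conditions can never transition, so the loop simply runs until the suite's timeout. A rough equivalent of that wait, sketched with the kubernetes CustomObjectsApi (the plural name llminferenceservices and the serving.kserve.io group are assumptions based on the resource kind; this is not the test's actual code):

    import time
    from kubernetes import client, config

    def wait_for_conditions(name, namespace,
                            want=("Ready", "WorkloadsReady", "RouterReady"),
                            timeout=600, interval=1.0):
        """Block until every wanted condition on the LLMInferenceService is True."""
        config.load_kube_config()
        api = client.CustomObjectsApi()
        deadline = time.time() + timeout
        missing = set(want)
        while time.time() < deadline:
            obj = api.get_namespaced_custom_object(
                group="serving.kserve.io", version="v1alpha1",
                namespace=namespace, plural="llminferenceservices", name=name)
            conds = {c["type"]: c.get("status")
                     for c in obj.get("status", {}).get("conditions", [])}
            missing = {t for t in want if conds.get(t) != "True"}
            if not missing:
                return obj
            time.sleep(interval)  # the log shows roughly one poll per second
        raise TimeoutError(f"conditions never became True: {missing}")

    # e.g. wait_for_conditions("autoscale-update-keda", "kserve-ci-e2e-test")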
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:12.985214] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:12.992052] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:13.992361] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:13.999701] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:15.000073] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:15.006781] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:16.007054] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:16.014216] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:17.014536] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:17.021093] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:18.021413] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:18.028088] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:19.028344] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:19.035278] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:20.035543] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:20.042523] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:21.042819] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:21.049994] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:22.050277] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:22.057283] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:23.057552] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:23.064498] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:24.064752] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:24.072028] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:25.072497] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:25.079284] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:26.079579] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:26.086085] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:27.086384] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:27.093167] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:28.093515] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:28.100802] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:29.101141] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:29.108250] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:30.108648] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:30.115039] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:31.115357] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:31.122776] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:32.123049] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:32.129866] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:33.130209] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:33.137590] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:34.137934] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:34.145143] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:35.145526] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:35.152752] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:36.153080] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:36.160724] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:37.161024] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:37.168682] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:38.169117] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:38.176444] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:32:39.176809] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:32:39.184610] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] (the get_llmisvc poll and the identical "Waiting: Missing true conditions" output above repeat once per second from 2026-04-24T20:32:40 through 2026-04-24T20:33:15, always with the same ScalingCRDNotFound conditions; the duplicate iterations are elided here)
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:15.448706] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:15.456293] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:16.456713] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:16.464789] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:17.465144] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:17.471779] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:18.472044] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:18.478633] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:19.478943] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:19.485548] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:20.485832] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:20.493276] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:21.493637] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:21.501222] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:22.501553] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:22.508906] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:23.509210] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:23.516158] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:24.516542] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:24.523659] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:25.523952] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:25.530633] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:26.530947] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:26.538594] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:27.538894] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:27.547153] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:28.547471] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:28.554155] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:29.554404] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:29.561371] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:30.561694] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:30.568707] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:31.568982] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:31.575587] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:32.575857] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:32.582825] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:33.583121] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:33.590759] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
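Note: every False condition in the cycle above fails for the same root cause: the controller cannot resolve the VariantAutoscaling CRD (no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1", reason ScalingCRDNotFound), so the autoscale-update-keda LLMInferenceService can never reach Ready however long the test polls. A minimal way to confirm the CRD is absent from the test cluster is sketched below; the plural CRD name variantautoscalings.llmd.ai is an assumption derived from the kind and group shown in the log, not something the log itself prints.

    # Sketch: check whether the VariantAutoscaling CRD exists on the cluster.
    # Assumes KUBECONFIG already points at the ephemeral test cluster.

    # List every resource served under the llmd.ai API group; empty output
    # means no llmd.ai kind (including VariantAutoscaling) is registered.
    kubectl api-resources --api-group=llmd.ai

    # Query the CRD directly. The <plural>.<group> name is the conventional
    # form and an assumption here -- the log only shows kind and apiVersion.
    kubectl get crd variantautoscalings.llmd.ai

If the CRD is indeed missing, the fix lies in the test environment rather than in the service under test: whatever component ships VariantAutoscaling (the llm-d autoscaling operator, judging by the group name) has to be installed before this test runs. Note also that every condition's lastTransitionTime stays pinned at 2026-04-24T20:21:44Z while the poll runs past 20:33, i.e. the controller makes no progress between iterations.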
[... the get_llmisvc poll (logging.py:34/43) and the test_llm_inference_service.py:632 "Waiting: Missing true conditions" record repeat unchanged once per second from 2026-04-24T20:33:10 through 2026-04-24T20:33:47; every iteration reports the same five conditions (MainWorkloadReady, Ready, and WorkloadsReady False with reason ScalingCRDNotFound; RouterReady Unknown; PresetsCombined True), all with lastTransitionTime 2026-04-24T20:21:44Z ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:48.700337] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:48.706849] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime':
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:49.707110] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:49.713952] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:50.714243] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:50.721324] end - ✅ in 
0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:51.721587] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:51.727875] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:52.728142] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:52.734749] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:53.735068] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:53.741812] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed 
to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:54.742129] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:54.749987] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:55.750238] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:55.757294] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:56.757728] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:56.764389] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:57.764659] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:57.771748] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:58.772055] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:58.779014] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:33:59.779438] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:33:59.786227] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:00.786536] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:00.793173] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:01.793451] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:01.800193] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:02.800641] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:02.806946] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:03.807269] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:03.813678] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:04.813965] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:04.820894] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:05.821156] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:05.827840] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:06.828107] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:06.834929] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:07.835212] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:07.842076] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:08.842493] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:08.849665] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:09.850011] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:09.856700] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:10.857021] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:10.864821] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:11.865144] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:11.875927] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:12.876368] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:12.883942] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:13.884332] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:13.890597] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:14.890869] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:14.897796] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
[e2e-llm-inference-service] [... the same three log lines repeat verbatim on every subsequent poll, roughly once per second from 20:34:14 through 20:34:51, always reporting the identical condition set with reason ScalingCRDNotFound; only the get_llmisvc timestamps and call durations (0.006-0.011s) vary ...]
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:51.163792] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:51.170799] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:52.171109] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:52.178224] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:53.178550] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:53.185851] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:54.186177] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:54.193459] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:55.193767] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:55.200240] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 
'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:56.200552] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:56.207986] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:57.208375] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:57.216851] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:58.217263] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:58.224052] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:34:59.224363] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:34:59.231360] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:00.231606] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:00.238672] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:01.238933] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:01.245753] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 
'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:02.246084] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:02.252595] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:03.252907] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:03.260435] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:04.260874] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:04.275223] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:05.275593] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:05.283971] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:06.284211] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:06.290638] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:07.290913] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:07.297807] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:08.298095] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:08.304592] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:09.304879] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:09.312010] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:10.312291] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:10.318665] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:11.318961] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:11.325536] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:12.325805] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:12.332661] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:13.332975] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:13.340545] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:14.340811] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:14.348828] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:15.349147] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:15.356502] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:16.356745] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:16.364664] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]

[Condensed: from 20:35:17 the wait loop at test_llm_inference_service.py:632 polls get_llmisvc once per second against 'autoscale-update-keda' in namespace 'kserve-ci-e2e-test' (API version 'v1alpha1'). Every iteration logs the same pair of e2e.llmisvc.logging start/end lines followed by "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}", and reports the same five conditions, all pinned at lastTransitionTime 2026-04-24T20:21:44Z:

  MainWorkloadReady  False    reason=ScalingCRDNotFound (severity Info)
  PresetsCombined    True
  Ready              False    reason=ScalingCRDNotFound
  RouterReady        Unknown
  WorkloadsReady     False    reason=ScalingCRDNotFound

Each False condition carries the identical message: 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"'.]
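The repeated failure is mechanical: 'no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"' is the Kubernetes RESTMapper error for a resource type the API server has never had registered. In other words, the VariantAutoscaling CRD (group llmd.ai) was not installed on this test cluster, so the controller's reconcile of the scaling object cannot progress and the reason stays ScalingCRDNotFound. A quick check against the same kubeconfig would confirm this; the sketch below uses the official kubernetes Python client, and the CRD object name variantautoscalings.llmd.ai is an assumption from the usual lower-case-plural convention (the log never prints it):

```python
# Hypothetical check: is the VariantAutoscaling CRD registered at all?
# "variantautoscalings.llmd.ai" is an assumed plural, not taken from the log.
from kubernetes import client, config
from kubernetes.client.exceptions import ApiException

config.load_kube_config()  # e.g. the kubeconfig written by the get-kubeconfig step
ext = client.ApiextensionsV1Api()
try:
    crd = ext.read_custom_resource_definition("variantautoscalings.llmd.ai")
    served = [v.name for v in crd.spec.versions if v.served]
    print(f"CRD present, served versions: {served}")
except ApiException as e:
    if e.status == 404:
        # Matches the controller's "no matches for kind" error: the CRD is absent.
        print("variantautoscalings.llmd.ai not found; install the llm-d CRDs first")
    else:
        raise
```

If the CRD is indeed absent, the fix likely belongs in cluster setup (installing the llm-d / VariantAutoscaling CRDs before this test runs), not in the test itself.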
[The identical start/end/Waiting triple repeats once per second through 20:35:54; only the timestamps and the get_llmisvc call latency (0.006-0.099 s) change.]
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:54.888647] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:54.894915] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:55.895345] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:55.946986] end - ✅ in 0.051s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: 
{'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:56.947418] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:57.046948] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:58.047479] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:58.055182] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:59.055715] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:59.062668] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version 
"llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:00.062920] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:00.070763] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:01.071022] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:01.077464] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:02.077891] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:02.085176] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:03.085667] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:03.146927] end - ✅ in 0.061s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:04.147219] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:04.154179] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:05.154447] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:05.160903] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:06.161173] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:06.168060] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:07.168387] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:07.246796] end - ✅ in 0.078s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:08.247112] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:08.255098] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:09.255482] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:09.346935] end - ✅ in 0.091s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:10.347364] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:10.354550] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:36:11.354814] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:11.362055] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:12.362547] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:12.447013] end - ✅ in 0.084s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:13.447292] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:13.546865] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:14.547144] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:14.554283] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:15.554724] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:15.646981] end - ✅ in 0.092s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:16.647265] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:16.654187] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main 
workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:17.654497] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:17.661164] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:18.661595] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:18.668776] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:19.669114] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:19.675937] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:20.676292] start - args=(, 'autoscale-update-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:20.683351] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': 
'2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
 {'lastTransitionTime': '2026-04-24T20:21:44Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
 {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
 {'lastTransitionTime': '2026-04-24T20:21:44Z', 'status': 'Unknown', 'type': 'RouterReady'},
 {'lastTransitionTime': '2026-04-24T20:21:44Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-update-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... the get_llmisvc poll (start/end, 0.006-0.012s each) and this identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" record with the same five conditions repeated once per second from 2026-04-24T20:36:21 through 2026-04-24T20:36:39 ...]
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T20:36:39.835531] end - ❌ 900.071s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [the same five conditions as above]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T20:36:39.836027] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
 'kind': 'LLMInferenceService',
 'metadata': {'annotations': None,
              'creation_timestamp': None,
              'deletion_grace_period_seconds': None,
              'deletion_timestamp': None,
              'finalizers': None,
              'generate_name': None,
              'generation': None,
              'labels': None,
              'managed_fields': None,
              'name': 'autoscale-update-keda',
              'namespace': 'kserve-ci-e2e-test',
              'owner_references': None,
              'resource_version': None,
              'self_link': None,
              'uid': None},
 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-update-16605242'},
                       {'name': 'workload-llmd-simulator-no-repl-b8014ceb'},
                       {'name': 'scaling-keda-autoscale-update-k-a3916813'}]},
 'status': None}), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [delete_llmisvc] [2026-04-24T20:36:39.857114] end - ✅ in 0.020s
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_autoscaling_update_keda] [2026-04-24T20:36:39.857278] end - ❌ 900.160s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [the same five conditions as above]
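
Note: every condition above that is False fails for the same reason, ScalingCRDNotFound. The cluster has no CRD serving kind "VariantAutoscaling" in group/version "llmd.ai/v1alpha1", so the llmisvc controller cannot reconcile the VariantAutoscaling (VA) resource that the scaling-keda preset requires, and Ready/WorkloadsReady can never turn True within the 900s timeout. A minimal pre-flight sketch using the standard kubernetes Python client (a hypothetical helper, not part of this test suite; the group and kind are taken from the log above, everything else is an assumption):

    # Hypothetical pre-flight check, not part of this test suite: verify that
    # a CRD serving kind=VariantAutoscaling in group llmd.ai is installed
    # before running the KEDA autoscaling e2e tests.
    from kubernetes import client, config

    def variant_autoscaling_crd_installed() -> bool:
        config.load_kube_config()  # use load_incluster_config() inside a pod
        crds = client.ApiextensionsV1Api().list_custom_resource_definition()
        return any(
            crd.spec.group == "llmd.ai" and crd.spec.names.kind == "VariantAutoscaling"
            for crd in crds.items
        )

    if __name__ == "__main__":
        # False here reproduces the ScalingCRDNotFound condition seen above.
        print("VariantAutoscaling CRD installed:", variant_autoscaling_crd_installed())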
_ test_llm_autoscaling_cleanup_keda[router-managed-workload-llmd-simulator-no-replicas-scaling-keda] _
[gw0] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', se...
 {'name': 'scaling-keda-autoscale-cleanup-e034e0d7'}]},
 'status': None}, model_name='facebook/opt-125m')

    @pytest.mark.llminferenceservice
    @pytest.mark.autoscaling
    @pytest.mark.autoscaling_keda
    @pytest.mark.parametrize(
        "test_case",
        [
            pytest.param(
                TestCase(
                    base_refs=[
                        "router-managed",
                        "workload-llmd-simulator-no-replicas",
                        "scaling-keda",
                    ],
                    prompt="KServe is a",
                    service_name="autoscale-cleanup-keda",
                ),
                marks=[
                    pytest.mark.cluster_cpu,
                    pytest.mark.cluster_single_node,
                    pytest.mark.llmd_simulator,
                ],
            ),
        ],
        indirect=["test_case"],
        ids=generate_test_id,
    )
    @log_execution
    def test_llm_autoscaling_cleanup_keda(test_case: TestCase):
        """Removing scaling config should delete VA and ScaledObject."""
        inject_k8s_proxy()
        kserve_client = _new_kserve_client()
        service_name = test_case.llm_service.metadata.name

        try:
>           _create_and_wait(kserve_client, test_case)

llmisvc/test_llm_autoscaling.py:736:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

kserve_client = 
test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', se...
 {'name': 'scaling-keda-autoscale-cleanup-e034e0d7'}]},
 'status': None}, model_name='facebook/opt-125m')

    def _create_and_wait(kserve_client, test_case):
        """Create LLMISVC and wait for it to be ready."""
        create_llmisvc(kserve_client, test_case.llm_service)
>       wait_for_llm_isvc_ready(
            kserve_client, test_case.llm_service, test_case.wait_timeout
        )

llmisvc/test_llm_autoscaling.py:295:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (, {'api_version': 'serving.kserve.io/v1alpha1',
 'kin...o-repl-06d009c2'},
 {'name': 'scaling-keda-autoscale-cleanup-e034e0d7'}]},
 'status': None}, 900)
kwargs = {}, func_name = 'wait_for_llm_isvc_ready'
timestamp_start = '2026-04-24T20:35:46.647178', start_time = 1777062946.6474917
duration = 900.5080728530884, timestamp_end = '2026-04-24T20:50:47.155565'

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func_name = func.__name__

        timestamp_start = datetime.now().isoformat()
        logger.info(
            f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
        )
        start_time = time.time()

        try:
>           result = func(*args, **kwargs)

llmisvc/logging.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

kserve_client = 
given = {'api_version': 'serving.kserve.io/v1alpha1',
 'kind': 'LLMInferenceService',
 'metadata': {'annotations': None,
 ...ator-no-repl-06d009c2'},
 {'name': 'scaling-keda-autoscale-cleanup-e034e0d7'}]},
 'status': None}
timeout_seconds = 900

    @log_execution
    def wait_for_llm_isvc_ready(
        kserve_client: KServeClient,
        given: V1alpha1LLMInferenceService,
        timeout_seconds: int = 900,
    ) -> str:
        def assert_llm_isvc_ready():
            out = get_llmisvc(
                kserve_client,
                given.metadata.name,
                given.metadata.namespace,
                given.api_version.split("/")[1],
            )

            if "status" not in out:
                raise AssertionError("No status found in LLM inference service")

            status = out["status"]
            if "conditions" not in status:
                raise AssertionError("No conditions found in status")

            expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
            got_true_conditions = set()

            conditions = status["conditions"]

            for condition in conditions:
                if condition.get("status") == "True":
                    got_true_conditions.add(condition.get("type"))

            missing_conditions = expected_true_conditions - got_true_conditions
            if missing_conditions:
                raise AssertionError(
                    f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
                )
            return True

>       return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0)

llmisvc/test_llm_inference_service.py:618:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

assertion_fn = <function wait_for_llm_isvc_ready.<locals>.assert_llm_isvc_ready at 0x7f7cef6aa0c0>
timeout = 900, interval = 1.0

    def wait_for(
        assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
    ) -> Any:
        """Wait for the assertion to succeed within timeout."""
        deadline = time.time() + timeout
        while True:
            try:
>               return assertion_fn()

llmisvc/test_llm_inference_service.py:628:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def assert_llm_isvc_ready():
        out = get_llmisvc(
            kserve_client,
            given.metadata.name,
            given.metadata.namespace,
            given.api_version.split("/")[1],
        )

        if "status" not in out:
            raise AssertionError("No status found in LLM inference service")

        status = out["status"]
        if "conditions" not in status:
            raise AssertionError("No conditions found in status")

        expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
        got_true_conditions = set()

        conditions = status["conditions"]

        for condition in conditions:
            if condition.get("status") == "True":
                got_true_conditions.add(condition.get("type"))

        missing_conditions = expected_true_conditions - got_true_conditions
        if missing_conditions:
>           raise AssertionError(
                f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
            )
E           AssertionError: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]

llmisvc/test_llm_inference_service.py:613: AssertionError
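
Note: the readiness check above counts only conditions whose status is exactly the string "True"; both "False" and "Unknown" leave the condition type in missing_conditions, which is why RouterReady (Unknown) is reported as missing alongside the two False conditions. A self-contained replay of that set arithmetic against an abridged form of the conditions dumped in this failure (only the fields the check reads are kept):

    # Standalone replay of assert_llm_isvc_ready's set logic, using the
    # condition list from the failure above, abridged to type/status.
    conditions = [
        {"type": "MainWorkloadReady", "status": "False"},
        {"type": "PresetsCombined", "status": "True"},
        {"type": "Ready", "status": "False"},
        {"type": "RouterReady", "status": "Unknown"},
        {"type": "WorkloadsReady", "status": "False"},
    ]

    expected_true = {"Ready", "WorkloadsReady", "RouterReady"}
    got_true = {c["type"] for c in conditions if c["status"] == "True"}

    # Only PresetsCombined is True, so all three expected conditions are
    # missing; wait_for retries this assertion every second until timeout.
    print(expected_true - got_true)  # -> {'Ready', 'WorkloadsReady', 'RouterReady'}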
e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scaling-keda-autoscale-cleanup-e034e0d7 [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scaling-keda-autoscale-cleanup-e034e0d7 [e2e-llm-inference-service] ------------------------------ Captured log call ------------------------------- [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [test_llm_autoscaling_cleanup_keda] [2026-04-24T20:35:46.494233] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', service_name='autoscale-cleanup-keda', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, [e2e-llm-inference-service] 'managed_fields': None, [e2e-llm-inference-service] 'name': 'autoscale-cleanup-keda', [e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test', [e2e-llm-inference-service] 'owner_references': None, [e2e-llm-inference-service] 'resource_version': None, [e2e-llm-inference-service] 'self_link': None, [e2e-llm-inference-service] 'uid': None}, [e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-cleanu-53c71c79'}, [e2e-llm-inference-service] {'name': 'workload-llmd-simulator-no-repl-06d009c2'}, [e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-cleanup-e034e0d7'}]}, [e2e-llm-inference-service] 'status': None}, model_name='facebook/opt-125m')} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T20:35:46.506504] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, [e2e-llm-inference-service] 'managed_fields': None, [e2e-llm-inference-service] 'name': 'autoscale-cleanup-keda', [e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test', [e2e-llm-inference-service] 'owner_references': None, [e2e-llm-inference-service] 'resource_version': None, [e2e-llm-inference-service] 'self_link': None, [e2e-llm-inference-service] 'uid': None}, [e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-cleanu-53c71c79'}, [e2e-llm-inference-service] {'name': 'workload-llmd-simulator-no-repl-06d009c2'}, [e2e-llm-inference-service] {'name': 
'scaling-keda-autoscale-cleanup-e034e0d7'}]}, [e2e-llm-inference-service] 'status': None}), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T20:35:46.647039] end - ✅ in 0.140s [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T20:35:46.647178] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1', [e2e-llm-inference-service] 'kind': 'LLMInferenceService', [e2e-llm-inference-service] 'metadata': {'annotations': None, [e2e-llm-inference-service] 'creation_timestamp': None, [e2e-llm-inference-service] 'deletion_grace_period_seconds': None, [e2e-llm-inference-service] 'deletion_timestamp': None, [e2e-llm-inference-service] 'finalizers': None, [e2e-llm-inference-service] 'generate_name': None, [e2e-llm-inference-service] 'generation': None, [e2e-llm-inference-service] 'labels': None, [e2e-llm-inference-service] 'managed_fields': None, [e2e-llm-inference-service] 'name': 'autoscale-cleanup-keda', [e2e-llm-inference-service] 'namespace': 'kserve-ci-e2e-test', [e2e-llm-inference-service] 'owner_references': None, [e2e-llm-inference-service] 'resource_version': None, [e2e-llm-inference-service] 'self_link': None, [e2e-llm-inference-service] 'uid': None}, [e2e-llm-inference-service] 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-cleanu-53c71c79'}, [e2e-llm-inference-service] {'name': 'workload-llmd-simulator-no-repl-06d009c2'}, [e2e-llm-inference-service] {'name': 'scaling-keda-autoscale-cleanup-e034e0d7'}]}, [e2e-llm-inference-service] 'status': None}, 900), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:46.647568] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:46.653016] end - ✅ in 0.005s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:47.653357] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:47.746472] end - ✅ in 0.093s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:48.746815] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:48.846667] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:49.847061] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:49.853588] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:50.853977] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:50.863459] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:51.863707] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:51.872653] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:52.872894] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:52.879218] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:53.879487] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:53.886317] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:54.886543] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:54.892450] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:55.892712] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:55.899415] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:56.899809] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:35:56.906130] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:35:57.906573] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] 
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:01.946810] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:01.954575] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [
[e2e-llm-inference-service] {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
[e2e-llm-inference-service] {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
[e2e-llm-inference-service] {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
[e2e-llm-inference-service] {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'},
[e2e-llm-inference-service] {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
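Every blocking condition above fails for the same reason, ScalingCRDNotFound: the controller cannot resolve kind VariantAutoscaling in API group llmd.ai/v1alpha1, which indicates the llm-d VariantAutoscaling CRD is not installed on the test cluster. A quick existence check with the Kubernetes Python client, assuming the conventional plural CRD name variantautoscalings.llmd.ai (inferred from the kind and group in the message, not confirmed by the log):

    from kubernetes import client, config

    def variant_autoscaling_crd_installed() -> bool:
        # Assumed CRD name: plural "variantautoscalings" in group "llmd.ai", inferred from
        # 'no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"' above.
        config.load_kube_config()
        api = client.ApiextensionsV1Api()
        try:
            api.read_custom_resource_definition("variantautoscalings.llmd.ai")
            return True
        except client.exceptions.ApiException as exc:
            if exc.status == 404:
                return False
            raise

If this returns False, the likely fix is on the cluster-setup side (install the llm-d CRDs before the KEDA autoscaling tests run) rather than in the test itself.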
[e2e-llm-inference-service] (the get_llmisvc start/end pair and the byte-for-byte identical conditions dump from the 20:36:01 poll repeat once per second from 20:36:02 onward, differing only in poll timestamps; the duplicate entries are elided)
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:36.405717] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:36.412785] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:37.413206] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:37.420505] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:38.420916] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:38.428022] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:39.428450] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:39.435248] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:40.435629] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:40.449683] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:41.449971] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:41.546815] end - ✅ in 0.097s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:42.547076] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:42.554151] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:43.554482] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:43.561293] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:44.561798] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:44.646914] end - ✅ in 0.085s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:45.647164] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:45.747019] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:46.747286] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:46.754779] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:47.755247] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:47.847619] end - ✅ in 0.092s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:48.847916] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:48.947285] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:49.947699] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:50.047213] end - ✅ in 0.099s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:51.047534] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:51.147187] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:36:52.147540] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:52.154623] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:53.154923] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:53.163373] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:54.163658] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:54.171530] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:55.171798] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:55.247119] end - ✅ in 0.075s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:56.247650] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:56.346657] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:57.346919] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:57.353926] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:58.354183] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:58.360807] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:36:59.361085] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:36:59.446864] end - ✅ in 0.085s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:00.447133] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:00.546963] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:01.547232] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:01.555981] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [
  {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
  {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
  {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
  {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'},
  {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the [get_llmisvc] poll (logging.py:34 start / logging.py:43 end, ~0.007s each) and the identical "Waiting: Missing true conditions" status above repeat once per second through 2026-04-24T20:37:39; duplicate iterations elided ...]
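Root cause, visible in every elided iteration: the controller cannot resolve kind "VariantAutoscaling" in API group "llmd.ai/v1alpha1", so the main-workload scaling reconcile fails with reason ScalingCRDNotFound, MainWorkloadReady, Ready, and WorkloadsReady stay False, and RouterReady never leaves Unknown. The VariantAutoscaling CRD therefore needs to be installed on the test cluster before the autoscale-cleanup-keda case runs. A minimal pre-flight sketch using the standard kubernetes Python client, assuming the conventional CRD name variantautoscalings.llmd.ai (the plural form is an assumption; confirm it against the actual CRD manifest):

    # Pre-flight sketch: verify the VariantAutoscaling CRD is registered.
    # "variantautoscalings.llmd.ai" is inferred from the kind/group in the
    # error above and is an assumption; check the real manifest.
    from kubernetes import client, config

    def crd_installed(name: str) -> bool:
        """Return True if a CustomResourceDefinition with this name exists."""
        try:
            client.ApiextensionsV1Api().read_custom_resource_definition(name)
            return True
        except client.ApiException as exc:
            if exc.status == 404:
                return False
            raise  # surface auth/connectivity errors instead of masking them

    config.load_kube_config()  # load_incluster_config() when run inside the cluster
    if not crd_installed("variantautoscalings.llmd.ai"):
        raise SystemExit("VariantAutoscaling CRD missing: the LLMInferenceService "
                         "will report ScalingCRDNotFound until it is installed")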
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:40.468618] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:40.475730] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:37:41.476111] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:41.483393] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:42.483838] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:42.491096] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:43.491521] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:43.498903] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:44.499152] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:44.506454] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:45.506760] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:45.514242] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:46.514519] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:46.521362] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:47.521632] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:47.529145] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:48.529452] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:48.536229] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:49.536536] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:49.546992] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:50.547459] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:50.555961] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:51.556401] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:51.562826] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:52.563085] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:52.569678] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:53.569942] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:53.576923] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:54.577363] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:54.584642] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:55.584981] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:55.592422] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:56.592880] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:56.599892] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:57.600167] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:57.606980] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:58.607224] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:58.613743] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:37:59.614066] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:37:59.620895] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:00.621246] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:00.630065] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:01.630529] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:01.647877] end - ✅ in 0.017s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:02.648284] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:02.655871] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:03.656353] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:03.663901] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:04.664374] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:04.673968] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:38:05.674429] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:05.681562] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:06.681831] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:06.688470] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:42.957153] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:42.964864] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:43.965337] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:43.972139] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:44.972614] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:44.979637] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:45.979929] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:45.987085] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:46.987635] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:46.994832] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:47.995124] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:48.002259] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:49.002574] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:49.009554] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:50.009853] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:50.017094] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:51.017585] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:51.025209] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:52.025747] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:52.033212] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:53.033553] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:53.040847] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:38:54.041371] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:54.048833] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:55.049156] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:55.056821] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:56.057107] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:56.063946] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:57.064192] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:57.073186] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:58.073529] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:58.080518] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:38:59.080798] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:38:59.088071] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
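[editor's note] The repeating ScalingCRDNotFound condition above is the root cause of this failure: the cluster has no CRD registered for kind VariantAutoscaling in API group llmd.ai/v1alpha1, so the controller can never reconcile the main workload's scaling and the LLMInferenceService never reaches Ready. A quick diagnostic sketch for a live cluster follows; the plural resource name variantautoscalings.llmd.ai is an assumption based on the usual Kubernetes naming convention for the kind, not something taken from this log.

    # Does the llmd.ai API group expose any resources on this cluster?
    kubectl api-resources --api-group=llmd.ai
    # Is the VariantAutoscaling CRD installed? (CRD name assumed from the kind)
    kubectl get crd variantautoscalings.llmd.ai

If both commands come back empty or NotFound, the autoscaling CRDs were simply never applied before the test ran, which matches the "no matches for kind" error in every condition message.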
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:15.212263] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:15.219770] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:16.220177] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:16.226623] end - ✅ in 0.006s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:17.226889] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:17.233759] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:39:18.234141] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:18.241197] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:19.241740] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:19.248741] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:20.249018] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:20.256204] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:21.256711] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:21.264077] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
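
What these entries show (test_llm_inference_service.py:632) is a plain poll-until-true wait on the LLMInferenceService status conditions. The following is a minimal sketch of that behavior, not the actual test code: the CRD coordinates (group serving.kserve.io, plural llminferenceservices), the get_llmisvc helper shape, and the timeout are assumptions; the one-second cadence, the expected set {'Ready', 'RouterReady', 'WorkloadsReady'}, and the log format follow the output above.

```python
# Sketch (assumed reconstruction, not the real test_llm_inference_service.py)
# of the wait loop visible in the log above.
import logging
import time

from kubernetes import client, config

logger = logging.getLogger("e2e.llmisvc")


def get_llmisvc(api: client.CustomObjectsApi, name: str, namespace: str,
                version: str = "v1alpha1") -> dict:
    # Assumed CRD coordinates for LLMInferenceService.
    return api.get_namespaced_custom_object(
        group="serving.kserve.io", version=version,
        namespace=namespace, plural="llminferenceservices", name=name)


def wait_for_true_conditions(api: client.CustomObjectsApi, name: str,
                             namespace: str, expected: set[str],
                             timeout_s: int = 600) -> dict:
    """Poll once per second until every condition type in `expected` is 'True'."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        llmisvc = get_llmisvc(api, name, namespace)
        conditions = llmisvc.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = expected - true_types
        if not missing:
            return llmisvc
        # Mirrors the "Waiting: Missing true conditions" lines in the log.
        logger.info("Waiting: Missing true conditions: %s, expected %s, got %s",
                    missing, expected, conditions)
        time.sleep(1)
    raise TimeoutError(f"{namespace}/{name}: conditions {expected} never became True")


if __name__ == "__main__":
    config.load_kube_config()
    wait_for_true_conditions(client.CustomObjectsApi(), "autoscale-cleanup-keda",
                             "kserve-ci-e2e-test",
                             {"Ready", "RouterReady", "WorkloadsReady"})
```

Under this failure such a loop can never exit: every iteration re-reads the same ScalingCRDNotFound conditions, so it spins until the surrounding timeout fires.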
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:46.444969] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:46.451569] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:47.451837] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:47.459354] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:48.459632] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:48.467284] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:49.467743] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:49.475127] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:50.475401] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:50.482051] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:51.482337] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:51.489168] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:52.489470] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:52.496029] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:53.496389] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:53.504741] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:54.505114] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:54.512413] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:55.512880] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:55.523265] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:56.523586] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:56.534261] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:57.534555] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:57.541715] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:39:58.542165] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:39:58.549439] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
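Note on the failure: every False condition carries reason ScalingCRDNotFound, and the wrapped Kubernetes error, no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1", means no CRD for that group/kind is registered on the test cluster. The controller therefore can never reconcile the VariantAutoscaling object autoscale-cleanup-keda-kserve-va, so WorkloadsReady and Ready stay False and the wait at test_llm_inference_service.py:632 never completes. A quick way to confirm against the same cluster (the full CRD name is an assumption here, so these checks avoid guessing its plural form):

    # List resources served under the llmd.ai API group; empty output means the CRD is not installed
    kubectl api-resources --api-group=llmd.ai
    # Or search the installed CRDs directly
    kubectl get crds | grep -i variantautoscaling

If both come back empty, the likely fix is to install the VariantAutoscaling CRD (presumably shipped with the llm-d autoscaling components, given the llm-d argument passed to run-e2e-tests.sh) before the autoscale-cleanup-keda test runs.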
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:18.696338] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:18.703396] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason':
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:19.703787] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:19.710937] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:20.711255] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:20.718095] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:21.718388] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:21.725074] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:22.725501] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:22.732395] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:23.732809] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:23.739488] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:24.739887] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:24.746622] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:25.747047] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:25.754800] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:26.755243] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:26.762132] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:27.762492] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:27.769336] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:28.769593] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:28.776429] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:29.776667] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:29.783661] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:40:30.784010] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:30.790934] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:31.791209] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:31.797899] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:32.798165] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:32.804886] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:33.805286] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:33.811844] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:34.812156] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:34.819264] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:35.819604] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:35.833549] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:36.833831] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:36.840811] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:37.841089] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:37.848489] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:38.848777] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:38.855828] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:39.856237] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:39.863776] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:40.864222] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:40.873153] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:41.873747] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:41.880803] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:42.881161] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:42.888153] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:43.888569] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:43.895440] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:44.895709] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:44.902600] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:45.902837] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:45.947738] end - ✅ in 0.045s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:46.948023] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:46.955202] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
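The ScalingCRDNotFound status above wraps the standard Kubernetes RESTMapper failure ("no matches for kind ... in version ..."): the cluster has no CustomResourceDefinition registered for kind VariantAutoscaling in group/version llmd.ai/v1alpha1, so the controller can never reconcile the VariantAutoscaling object this KEDA autoscaling test expects, and the conditions stay False forever. A quick check against the test cluster, as a sketch; the plural CRD name variantautoscalings.llmd.ai below is inferred from the kind and group, not taken from this log:

    # Is the VariantAutoscaling CRD installed at all? The plural name below is
    # an assumption derived from kind=VariantAutoscaling, group=llmd.ai.
    kubectl get crd variantautoscalings.llmd.ai

    # List whatever the llmd.ai API group actually serves; empty output means
    # the component that ships this CRD was never deployed on the cluster.
    kubectl api-resources --api-group=llmd.ai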
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:47.955532] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:47.963061] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:48.963446] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:48.970642] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:49.971121] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:50.046861] end - ✅ in 0.075s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:51.047139] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:51.054793] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:52.055271] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:52.062763] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:53.063230] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:53.073732] end - ✅ in 0.010s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:54.074158] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:54.081925] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:40:55.082360] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:55.103158] end - ✅ in 0.021s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:56.103695] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:56.110825] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:57.111259] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:57.118314] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:58.118600] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:58.125883] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:40:59.126343] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:40:59.146845] end - ✅ in 0.020s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:00.147216] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:00.155664] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:01.155976] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:01.163478] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:02.163845] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:02.171251] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:03.171768] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:03.178937] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:04.179220] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:04.186605] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:05.186899] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:05.246951] end - ✅ in 0.060s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:06.247198] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:06.346723] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:07.347036] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:07.354639] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:08.355071] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:08.362998] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:09.363350] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:09.370907] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:10.371355] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:10.446781] end - ✅ in 0.075s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:11.447032] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:11.454005] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:12.454535] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:12.547066] end - ✅ in 0.092s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:13.547326] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:13.647093] end - ✅ in 0.100s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:14.647604] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:14.847459] end - ✅ in 0.200s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:15.847829] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:15.947284] end - ✅ in 0.099s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:16.947786] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:16.954981] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:17.955346] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:17.963046] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:18.963287] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:18.971024] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
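Every iteration in the block above is blocked by the same root cause: the conditions carry reason ScalingCRDNotFound because no CRD on the cluster serves kind "VariantAutoscaling" in "llmd.ai/v1alpha1", so the controller can never fetch kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va, WorkloadsReady and Ready stay False, and RouterReady stays Unknown. A minimal preflight check for that CRD, sketched with the standard kubernetes Python client (the helper name and its use here are illustrative assumptions, not part of the test suite):

    # Sketch only: confirm the cluster registers the VariantAutoscaling CRD
    # (group llmd.ai, version v1alpha1) that the controller in this log
    # fails to find. Assumes the standard `kubernetes` Python client and a
    # kubeconfig in the default location; the helper is hypothetical.
    from kubernetes import client, config

    def variant_autoscaling_crd_present() -> bool:
        config.load_kube_config()  # use load_incluster_config() inside a pod
        ext = client.ApiextensionsV1Api()
        for crd in ext.list_custom_resource_definition().items:
            if (crd.spec.group == "llmd.ai"
                    and crd.spec.names.kind == "VariantAutoscaling"
                    and any(v.name == "v1alpha1" and v.served
                            for v in crd.spec.versions)):
                return True
        return False

    if __name__ == "__main__":
        print("VariantAutoscaling (llmd.ai/v1alpha1) CRD present:",
              variant_autoscaling_crd_present())

If a check like this fails, the likely fix is environmental rather than in the test: whatever component provides the llmd.ai/v1alpha1 VariantAutoscaling CRD needs to be installed before the suite runs, since the wait below can only time out while the kind is unregistered.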
[get_llmisvc] [2026-04-24T20:41:19.971275] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:19.978653] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:20.979087] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:20.992087] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:21.992350] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:22.006457] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:23.006825] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:23.014532] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:24.014826] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:24.022441] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:25.022690] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:25.030447] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:26.030707] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:26.038006] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:27.038427] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:27.046500] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:28.046769] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:28.055178] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:29.055658] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:29.064232] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:30.064729] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:30.072047] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:31.072506] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:31.081861] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:32.082371] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:32.089630] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:33.090109] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:33.098145] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:34.098418] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:34.106425] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:35.106862] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:35.114148] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:36.114410] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:36.121552] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
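The repeated "Waiting" lines come from a condition-polling helper at test_llm_inference_service.py:632: it fetches the LLMInferenceService once per second, collects the condition types whose status is True, and keeps waiting while any of the expected set is missing. A minimal sketch of such a loop, assuming a get_llmisvc callable shaped like the one logged above (the helper name and signature here are illustrative, not the repository's actual implementation):

    import time

    # Sketch only: mirrors the logged behavior, not the repo's real helper.
    def wait_for_llmisvc_ready(get_llmisvc, name, namespace, expected, timeout=600):
        """Poll until every condition type in `expected` reports status True."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            llmisvc = get_llmisvc(name, namespace, "v1alpha1")
            conditions = llmisvc.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = expected - true_types
            if not missing:
                return llmisvc
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {expected}, got {conditions}")
            time.sleep(1)
        raise TimeoutError(f"{name}: conditions {expected} never all became True")

    # e.g. wait_for_llmisvc_ready(get_llmisvc, "autoscale-cleanup-keda",
    #                             "kserve-ci-e2e-test",
    #                             {"Ready", "RouterReady", "WorkloadsReady"})

Because the conditions in this run never change (lastTransitionTime stays at 20:36:01Z), the loop can only run out its timeout.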
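Every failing condition carries the same reason, ScalingCRDNotFound: the controller cannot resolve the kind VariantAutoscaling in API group/version llmd.ai/v1alpha1, which means the CRD backing variant autoscaling was never installed on the test cluster. A quick diagnostic sketch using the kubernetes Python client; the CRD name "variantautoscalings.llmd.ai" is an assumption derived from the usual <plural>.<group> convention and should be checked against the actual manifest:

    from kubernetes import client, config

    # Diagnostic sketch: is the VariantAutoscaling CRD installed?
    # "variantautoscalings.llmd.ai" is assumed from <plural>.<group> naming.
    def crd_installed(name: str = "variantautoscalings.llmd.ai") -> bool:
        config.load_kube_config()  # or config.load_incluster_config() in-pod
        api = client.ApiextensionsV1Api()
        try:
            api.read_custom_resource_definition(name)
            return True
        except client.exceptions.ApiException as exc:
            if exc.status == 404:
                return False
            raise

    if __name__ == "__main__":
        print("VariantAutoscaling CRD present:", crd_installed())

If this returns False, the ScalingCRDNotFound loop above points to an environment gap (the CRD was never applied before the autoscale-cleanup-keda test ran) rather than a controller regression.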
[... identical polling output elided; at 2026-04-24T20:41:55 the loop is still emitting the same start/end/Waiting cycle with the conditions unchanged ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:55.261323] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:55.268086] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:56.268371] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:56.275759] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:57.276155] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:57.283266] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:58.283690] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:58.290627] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:41:59.290900] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:41:59.297870] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:00.298140] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:00.304913] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:01.305196] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:01.314455] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:02.314911] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:02.321787] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:03.322085] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:03.329281] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:04.329744] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:04.342045] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:05.342361] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:05.350039] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:06.350586] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:06.357633] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:07.358074] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:07.365968] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:42:08.366480] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:08.373938] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:09.374366] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:09.381177] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:10.381535] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:10.388949] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:11.389386] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:11.397807] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:12.398106] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:12.405772] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:13.406070] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:13.414399] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:14.414713] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:14.422174] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:15.422648] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:15.431633] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:16.431998] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:16.439584] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:17.439888] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:17.447716] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:18.448153] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:18.456152] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:19.456459] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
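The condition dump pins the failure to a single root cause: the llmisvc controller cannot resolve kind VariantAutoscaling in API group llmd.ai/v1alpha1, meaning the CRD backing the autoscale-cleanup-keda scenario is not installed on the test cluster, so Ready and WorkloadsReady can never turn True and the wait loop simply runs out its timeout. A manual spot-check from outside the CI run would confirm this; note the CRD object name below is an assumption based on the conventional <plural>.<group> naming for kind VariantAutoscaling, not something the log states:

    # Not part of the CI run: list everything the cluster serves under the
    # llmd.ai API group. An empty result confirms the controller's
    # ScalingCRDNotFound condition.
    kubectl api-resources --api-group=llmd.ai

    # Same check against the CRD object itself; the name is assumed from the
    # usual <plural>.<group> convention, so adjust it if the project differs.
    kubectl get crd variantautoscalings.llmd.ai \
      || echo "VariantAutoscaling CRD not installed; ScalingCRDNotFound is expected"

If the CRD is indeed missing, the fix likely belongs in cluster setup (installing the llm-d VariantAutoscaling manifests before the e2e suite runs) rather than in the test itself.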
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:26.509760] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:26.516920] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:27.517355] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:27.524293] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:28.524608] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:28.531720] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:29.532193] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:29.539576] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:30.539951] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:30.547069] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:31.547380] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:31.554761] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:42:32.555140] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:32.562485] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:33.562913] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:33.570284] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:34.570795] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:34.578914] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:35.579348] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:35.587091] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:36.587471] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:36.595187] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:37.595573] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:37.604412] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:38.604657] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:38.612071] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:39.612524] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:39.620364] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:40.620680] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:40.627511] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:41.627746] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:41.635847] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:42.636144] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:42.646118] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:43.646603] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:43.653913] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:44.654206] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:44.662443] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:45.662868] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:45.670116] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:46.670422] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:46.677518] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:47.677979] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:47.685071] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:48.685325] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:48.692088] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:49.692385] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:49.699480] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:50.699763] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:50.706999] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:42:51.707407] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:42:51.714853] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [
    {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
    {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
    {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'},
    {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the start/end/Waiting cycle above repeats once per second, byte-for-byte identical apart from the get_llmisvc timestamps, from 2026-04-24T20:42:52 through 20:43:29; every iteration reports the same five conditions, still blocked on ScalingCRDNotFound ...]
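The repeating start/end/Waiting triplets come from a one-second polling loop: line 632 of test_llm_inference_service.py re-reads the LLMInferenceService and reports which of the expected conditions are not yet 'True'. Below is a minimal sketch of that kind of wait loop, assuming KServe's conventional group/version/plural (serving.kserve.io/v1alpha1, llminferenceservices); wait_for_conditions is an illustrative name, not the suite's actual get_llmisvc helper.

# Sketch only: approximates the polling the test log shows; not suite code.
import time
from kubernetes import client, config

EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

def wait_for_conditions(name: str, namespace: str, timeout_s: int = 600) -> bool:
    api = client.CustomObjectsApi()
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        # Group/version/plural below are assumed from KServe conventions.
        obj = api.get_namespaced_custom_object(
            group="serving.kserve.io", version="v1alpha1",
            namespace=namespace, plural="llminferenceservices", name=name,
        )
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return True
        print(f"Waiting: Missing true conditions: {missing}")
        time.sleep(1)
    return False

config.load_kube_config()
wait_for_conditions("autoscale-cleanup-keda", "kserve-ci-e2e-test")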
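The loop can never succeed here because the status is static: the controller sets ScalingCRDNotFound when the API server has no mapping for kind "VariantAutoscaling" in "llmd.ai/v1alpha1", i.e. the VariantAutoscaling CRD (presumably shipped with the llm-d autoscaling components, given the llm-d deployment mode of this run) was never installed on the test cluster. A quick out-of-band check is to list the CRDs in the llmd.ai group; the snippet below is an illustrative sketch using the kubernetes Python client, not test-suite code.

# Sketch only: confirm whether any llmd.ai CRDs are registered on the cluster.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside a pod

ext = client.ApiextensionsV1Api()
llmd_crds = [
    crd.metadata.name
    for crd in ext.list_custom_resource_definition().items
    if crd.spec.group == "llmd.ai"
]

if llmd_crds:
    print("llmd.ai CRDs present:", llmd_crds)
else:
    # This is the state the controller reports as ScalingCRDNotFound:
    # nothing in group llmd.ai can serve kind "VariantAutoscaling".
    print('no CRDs registered for group "llmd.ai"')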
[e2e-llm-inference-service] [... the get_llmisvc poll and the identical "Waiting: Missing true conditions" dump above repeat about once per second from 2026-04-24T20:43:23 through 2026-04-24T20:44:00; only the poll timestamps and call durations (0.006s to 0.012s) vary, every status condition stays exactly as shown ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:01.260775] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:01.267731] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:02.267997] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:02.275027] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:03.275370] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:03.281720] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:04.282030] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:04.291083] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:05.291389] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:05.298706] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:06.298990] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:06.306747] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:07.307027] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:07.320098] end - ✅ in 0.013s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:08.320549] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:08.329785] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:44:09.330060] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:09.337217] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:10.337716] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:10.344729] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:11.345018] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:11.351941] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:12.352201] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:12.360081] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:13.360349] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:13.367960] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:14.368232] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:14.376387] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:15.376761] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:15.384396] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:16.384657] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:16.391521] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:17.391820] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:17.399399] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:18.399662] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:18.406985] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:19.407391] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:19.415720] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:20.416045] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:20.423187] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:21.423534] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:21.431145] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:22.431652] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:22.438635] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:23.438952] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:23.446738] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:24.447012] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:24.453989] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:25.454326] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:25.461859] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]

[... from 2026-04-24T20:44:26 onward the test polls the LLMInferenceService once per second and logs an identical status every time; one representative iteration follows, with the five status conditions broken out one per line ...]

[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:44:26.462127] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:44:26.469246] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got:
    - MainWorkloadReady:  False    (reason ScalingCRDNotFound, severity Info)
    - PresetsCombined:    True     (severity Info)
    - Ready:              False    (reason ScalingCRDNotFound)
    - RouterReady:        Unknown
    - WorkloadsReady:     False    (reason ScalingCRDNotFound)
    All five conditions share lastTransitionTime 2026-04-24T20:36:01Z, and each False condition carries the same message:
    failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"

[... identical iterations at 20:44:27 through 20:45:04 elided; each get_llmisvc call returned in 0.006s-0.200s and the conditions never changed ...]
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:04.569796] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:04.577531] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:05.577806] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:05.585124] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
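The records above come from the test's readiness wait at test_llm_inference_service.py:632: a roughly once-per-second loop that re-fetches the LLMInferenceService and compares the condition types whose status is 'True' against the expected set. A minimal sketch of that check, assuming a get_llmisvc helper that returns the resource as a dict; the function name, signature, and timeout below are illustrative, not the repo's actual code:

import time

EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

def wait_for_llmisvc_ready(get_llmisvc, name, namespace, timeout_s=600.0, interval_s=1.0):
    # Poll the LLMInferenceService until every expected condition type
    # reports status "True", mirroring the Waiting records in the log above.
    deadline = time.monotonic() + timeout_s
    while True:
        obj = get_llmisvc(name, namespace, "v1alpha1")
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return obj  # all expected conditions are True
        if time.monotonic() >= deadline:
            raise TimeoutError(f"Missing true conditions: {missing}, got {conditions}")
        print(f"Waiting: Missing true conditions: {missing}, expected {EXPECTED}, got {conditions}")
        time.sleep(interval_s)

Because the controller keeps setting Ready and WorkloadsReady to False (reason ScalingCRDNotFound) and RouterReady stays Unknown, the missing set never shrinks, so a loop of this shape can only exit through its timeout.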
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:06.585427] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:06.592865] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:07.593115] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:07.600339] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:08.600681] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:08.608493] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:09.608783] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:09.618016] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:10.618290] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:10.625847] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:11.626197] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:11.634024] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:12.634287] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:12.641156] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:13.641417] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:13.649054] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:14.649461] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:14.656775] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:15.657084] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:15.664745] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:16.665051] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:16.674588] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
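The condition message itself pins down the root cause: 'no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"' means the API server has no VariantAutoscaling CRD registered, so the autoscaling reconcile fails before any workload comes up. One way to confirm that from the test environment, sketched with the official kubernetes Python client; the CRD name variantautoscalings.llmd.ai is the conventional lowercase-plural form and is an assumption here:

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # inside a pod, use config.load_incluster_config()
api = client.ApiextensionsV1Api()
try:
    # Assumed CRD name: <lowercase plural of kind>.<group>
    crd = api.read_custom_resource_definition("variantautoscalings.llmd.ai")
    print("CRD installed; served versions:", [v.name for v in crd.spec.versions])
except ApiException as exc:
    if exc.status == 404:
        print("VariantAutoscaling CRD is not installed on this cluster")
    else:
        raise

The excerpt's final poll, still reporting the same stuck conditions: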
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:36.819491] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:36.826599] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime':
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:37.826917] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:37.834213] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:38.834503] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:38.841270] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:39.841629] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:39.850904] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:40.851196] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:40.858181] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:41.858510] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:41.865549] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:42.865889] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:42.873005] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:43.873320] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:43.880201] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:44.880532] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:44.887644] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:45.887932] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:45.895022] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:45:46.895335] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:46.903258] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:47.903538] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:47.911342] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:48.911685] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:48.918982] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:49.919336] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:49.926156] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:50.926466] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:50.933878] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:51.934174] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:51.942543] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:52.942874] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:52.950016] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:53.950345] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:53.957084] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
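This poll loop can never converge: every failing condition has reason ScalingCRDNotFound, i.e. the llmisvc controller cannot resolve the VariantAutoscaling kind because its CRD is not registered on the test cluster. A quick way to confirm that from the same kubeconfig is to look the CRD up directly. The sketch below is a hedged diagnostic, not part of the test suite; it uses the official kubernetes Python client, and the CRD name "variantautoscalings.llmd.ai" is an assumption (plural inferred from kind "VariantAutoscaling" in group "llmd.ai"), not something this log confirms:

    # Hedged diagnostic sketch (not part of the test suite): check whether the
    # VariantAutoscaling CRD exists, reproducing the controller's lookup failure.
    # Assumes the official `kubernetes` Python client and the kubeconfig written
    # by the get-kubeconfig step. The plural name "variantautoscalings" is an
    # assumption inferred from kind "VariantAutoscaling" / group "llmd.ai".
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()  # e.g. KUBECONFIG=/credentials/cluster-886sz-kubeconfig

    crd_name = "variantautoscalings.llmd.ai"  # assumed <plural>.<group> form
    try:
        client.ApiextensionsV1Api().read_custom_resource_definition(crd_name)
        print(f"{crd_name}: installed")
    except ApiException as exc:
        if exc.status == 404:
            # Mirrors the log: no matches for kind "VariantAutoscaling" in
            # version "llmd.ai/v1alpha1" -> the CRD was never applied.
            print(f"{crd_name}: missing (HTTP 404)")
        else:
            raise

The shell equivalent would be "kubectl get crd variantautoscalings.llmd.ai" (same naming assumption). A 404 here would point the fix at cluster setup, suggesting the llm-d VariantAutoscaling CRD needs to be installed before the KEDA autoscale tests run, rather than at the test itself.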
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:54.957366] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:54.965451] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:55.965792] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:55.973579] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:56.973856] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:56.980854] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:57.981166] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:57.989054] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:58.989462] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:45:58.996760] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:45:59.997082] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:00.004755] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:01.005080] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:01.014380] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
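The once-per-second cadence comes from the test's wait helper at test_llm_inference_service.py:632, which re-reads the LLMInferenceService and logs the "Waiting: Missing true conditions" line until the expected condition types all report 'True' or a timeout expires. A minimal sketch of such a loop follows; get_llmisvc is a hypothetical stand-in for the helper whose start/end lines appear in the log, and its real signature and the real timeout are not shown there:

    # Minimal sketch of a poll-until-ready loop like the one producing the
    # repeated "Waiting: Missing true conditions" lines. get_llmisvc is a
    # hypothetical stand-in returning the LLMInferenceService as a dict.
    import time
    import logging

    logger = logging.getLogger("e2e.llmisvc.test_llm_inference_service")

    def wait_for_conditions(get_llmisvc, name, namespace, api_version,
                            expected=frozenset({"Ready", "RouterReady", "WorkloadsReady"}),
                            timeout_s=600, interval_s=1):
        deadline = time.monotonic() + timeout_s
        while time.monotonic() < deadline:
            llmisvc = get_llmisvc(name, namespace, api_version)
            conditions = llmisvc.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return conditions  # all expected condition types are 'True'
            logger.info("Waiting: Missing true conditions: %s, expected %s, got %s",
                        missing, set(expected), conditions)
            time.sleep(interval_s)  # one iteration per second, as in the log
        raise TimeoutError(f"{namespace}/{name} never reached {set(expected)}")

Because ScalingCRDNotFound reflects a missing CRD rather than a transient state, no amount of re-polling can change the outcome, so a loop like this runs its full timeout — which is why the identical status block fills the minutes of log collapsed above.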
[... collapsed polling entries end; the raw log resumes at the 20:46:09 iteration ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:09.079060] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:09.086490] end - ✅ in 0.007s [e2e-llm-inference-service] INFO
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:10.086828] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:10.093806] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:46:11.094198] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:11.100950] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:12.101219] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:12.109580] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:13.109886] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:13.116911] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:14.117224] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:14.125518] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:15.125781] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:15.133456] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:16.133779] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:16.142066] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
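The start/end/Waiting triple above is one iteration of the test's readiness poll: test_llm_inference_service.py:632 calls get_llmisvc('autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1') roughly once per second and re-checks status.conditions until Ready, RouterReady, and WorkloadsReady are all True or the wait times out. A minimal sketch of that wait loop, assuming the kubernetes Python client; the API group/plural (serving.kserve.io / llminferenceservices) and the timeout value are assumptions, not read from this log:

import time
from kubernetes import client, config

EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

def wait_for_llmisvc_ready(name: str, namespace: str, timeout_s: int = 600):
    """Poll the LLMInferenceService until the expected conditions are all True."""
    config.load_kube_config()  # kubeconfig for the ephemeral cluster, written by get-kubeconfig
    api = client.CustomObjectsApi()
    deadline = time.time() + timeout_s
    true_types: set = set()
    while time.time() < deadline:
        obj = api.get_namespaced_custom_object(
            group="serving.kserve.io",      # assumed API group for LLMInferenceService
            version="v1alpha1",             # version passed to get_llmisvc in the log
            namespace=namespace,
            plural="llminferenceservices",  # assumed plural
            name=name,
        )
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        if EXPECTED <= true_types:
            return conditions
        time.sleep(1)  # matches the ~1 s poll cadence visible in the log
    raise TimeoutError(f"missing true conditions: {EXPECTED - true_types}")

In this run the loop can never succeed: Ready and WorkloadsReady stay False for as long as the scaling CRD is missing, so the poll simply burns the timeout, as the repeated entries below show.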
[... the same get_llmisvc start/end/Waiting triple repeats once per second, 2026-04-24T20:46:03 through 2026-04-24T20:46:39, with identical conditions; only the poll timestamps and call durations (0.007–0.114 s) change ...]
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:39.333267] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:39.447527] end - ✅ in 0.114s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:40.447977] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:40.457456] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:41.457740] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:41.464931] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:42.465247] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:42.479438] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:43.479730] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:43.487744] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:44.488124] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:44.495605] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:45.495876] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:45.503183] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:46.503538] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:46.511063] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:47.511384] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:47.520019] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:48.520423] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:48.527863] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:49.528171] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:49.535944] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:50.536234] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:50.552067] end - ✅ in 0.015s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:51.552367] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:51.559451] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:52.559749] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:52.566999] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:53.567330] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:53.576010] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:54.576349] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:54.583043] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:55.583369] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:55.591221] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:56.591563] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:56.599277] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:57.599637] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:57.607766] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:46:58.608076] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:58.615481] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:46:59.615745] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:46:59.624790] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:00.625146] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:00.636821] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:01.637088] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:01.646726] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:02.647016] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:02.664272] end - ✅ in 0.017s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:03.664628] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:03.680712] end - ✅ in 0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:04.681006] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:04.695431] end - ✅ in 0.014s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]

[e2e-llm-inference-service] From 20:47:05 through 20:47:43 the test polls get_llmisvc for 'autoscale-cleanup-keda' in namespace 'kserve-ci-e2e-test' (API version v1alpha1) once per second; each call returns in roughly 0.01s, and every iteration logs the same message at test_llm_inference_service.py:632: Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}. The reported condition set never changes, and lastTransitionTime stays at 2026-04-24T20:36:01Z on all five conditions, so by this point the test had been waiting more than eleven minutes on a status that was already final:

    MainWorkloadReady   False    reason=ScalingCRDNotFound (severity Info)
    PresetsCombined     True     (severity Info)
    Ready               False    reason=ScalingCRDNotFound
    RouterReady         Unknown
    WorkloadsReady      False    reason=ScalingCRDNotFound

Every False condition carries the identical message: failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1". In other words, the llmisvc controller cannot create the VariantAutoscaling it needs for the autoscale-cleanup-keda case because no CRD for that kind (group llmd.ai, version v1alpha1) is registered on the cluster, and Ready/WorkloadsReady cannot turn True while the CRD is absent. Two sketches follow: a preflight check for the missing CRD, and a fail-fast variant of the wait loop.
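The root cause is stated in the condition message itself: no CRD for kind "VariantAutoscaling" in "llmd.ai/v1alpha1" is installed, so the scaling reconciler can never make progress. A preflight check would surface that in seconds rather than after minutes of polling. Below is a minimal sketch using the kubernetes Python client; the CRD name variantautoscalings.llmd.ai is inferred from the kind and group in the log (the actual plural may differ), and the check is illustrative, not part of the suite.

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    # Inferred from kind=VariantAutoscaling, group=llmd.ai in the log above;
    # hypothetical name -- the real plural may differ.
    CRD_NAME = "variantautoscalings.llmd.ai"

    def variant_autoscaling_crd_installed() -> bool:
        config.load_kube_config()  # use load_incluster_config() when in-cluster
        api = client.ApiextensionsV1Api()
        try:
            api.read_custom_resource_definition(CRD_NAME)
            return True
        except ApiException as exc:
            if exc.status == 404:  # not registered: the exact failure seen above
                return False
            raise

    if __name__ == "__main__":
        if not variant_autoscaling_crd_installed():
            raise SystemExit(f"missing CRD {CRD_NAME}: install the llm-d "
                             "VariantAutoscaling CRD or skip the autoscaling cases")

Wired in as a pytest skipif (or a hard assertion in cluster setup), this would turn the eleven-minute silent wait into a one-line, named failure before the test body runs.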
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:42.994136] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:43.001394] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:44.001812] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:44.009289] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:45.009786] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:45.016890] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:46.017250] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:46.024690] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:47.024976] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:47.037945] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:47:48.038218] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:48.045072] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:49.045356] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:49.052508] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:50.052819] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:50.059481] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:51.059775] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:51.067122] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:52.067362] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:52.075652] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:53.075944] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:53.082734] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:54.083007] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:54.090729] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:55.091066] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:55.098597] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:56.098832] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:56.105826] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:57.106118] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:57.112866] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:58.113161] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:58.120450] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:47:59.120827] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:47:59.129565] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:00.129919] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:00.137656] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:01.137992] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:01.145181] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:02.145628] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:02.152692] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:03.152954] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:03.160217] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:04.160508] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:04.167772] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:05.168096] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:05.175582] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:06.175866] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:06.183125] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:07.183550] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:07.190511] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:08.190856] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:08.198420] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... the get_llmisvc poll (logging.py:34/43, each call completing in ~0.006-0.121s) and the identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" message (test_llm_inference_service.py:632) repeat once per second from 20:48:09 through 20:48:47; every iteration reports the same five conditions shown above, unchanged since lastTransitionTime 2026-04-24T20:36:01Z ...]
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:48.247002] end - ✅ in 0.039s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:49.247400] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:49.254585] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:50.254889] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:50.261943] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:51.262259] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:51.269497] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:52.269831] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:52.276684] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:53.276979] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:53.283838] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:54.284118] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:54.292675] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:55.292978] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:55.300687] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:56.300952] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:56.307729] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:57.307989] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:57.314849] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:58.315180] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:58.322819] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:59.323108] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:59.329917] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:00.330174] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:00.337327] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:49:01.337599] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:01.344775] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:02.345155] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:02.353808] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:03.354263] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:03.361973] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:04.362352] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:04.369931] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:05.370325] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:05.378605] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:06.378881] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:06.386196] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:07.386539] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:07.394048] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:08.394504] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:08.401428] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:09.401709] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:09.408537] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:10.408813] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:10.416058] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:11.416379] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:11.424215] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:12.424700] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:12.431733] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got
[{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
 {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
 {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
 {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'},
 {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[The [get_llmisvc] start/end pair and the identical "Waiting: Missing true conditions" message above repeat once per second, unchanged apart from the poll timestamps, from 2026-04-24T20:49:13 through 2026-04-24T20:49:49; every repetition reports the same five conditions with lastTransitionTime 2026-04-24T20:36:01Z. The duplicates are elided here; the last captured poll follows below.]
[get_llmisvc] [2026-04-24T20:49:49.706392] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:49.713565] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:50.713839] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:50.720662] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:51.720959] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:51.727906] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:52.728224] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:52.735023] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:53.735331] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:53.742007] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:54.742247] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:54.749573] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:55.749827] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:55.757002] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:56.757322] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:56.764156] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:57.764485] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:57.771945] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:58.772246] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:58.779986] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 
'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:59.780227] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:59.787608] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:00.787980] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:00.794835] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:01.795197] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:01.802372] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:02.802621] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:02.809844] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:03.810120] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:03.816989] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': 
'2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:04.817335] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:04.825083] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:05.825570] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:05.833803] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:06.834178] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:06.842126] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:07.842599] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:07.852213] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, 
{'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:08.852549] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:08.859366] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:09.859672] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:09.866675] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:10.866958] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:10.873822] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:11.874098] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:11.880805] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:12.881061] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:12.888132] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 
[get_llmisvc] [2026-04-24T20:50:13.888456] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:13.895107] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:14.895485] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:14.903413] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:15.903798] start - args=(, 'autoscale-cleanup-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:15.911706] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:36:01Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-cleanup-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the same get_llmisvc poll (logging.py:34/43) and this identical "Waiting: Missing true conditions" entry repeat once per second, from the poll starting at 20:50:16.912147 through the poll ending at 20:50:47.155439; the condition payload never changes ...]
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T20:50:47.155565] end - ❌ 900.508s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [... the same five-condition payload shown in the poll entry above ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T20:50:47.155680] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
[e2e-llm-inference-service]  'kind': 'LLMInferenceService',
[e2e-llm-inference-service]  'metadata': {'annotations': None,
[e2e-llm-inference-service]               'creation_timestamp': None,
[e2e-llm-inference-service]               'deletion_grace_period_seconds': None,
[e2e-llm-inference-service]               'deletion_timestamp': None,
[e2e-llm-inference-service]               'finalizers': None,
[e2e-llm-inference-service]               'generate_name': None,
[e2e-llm-inference-service]               'generation': None,
[e2e-llm-inference-service]               'labels': None,
[e2e-llm-inference-service]               'managed_fields': None,
[e2e-llm-inference-service]               'name': 'autoscale-cleanup-keda',
[e2e-llm-inference-service]               'namespace': 'kserve-ci-e2e-test',
[e2e-llm-inference-service]               'owner_references': None,
[e2e-llm-inference-service]               'resource_version': None,
[e2e-llm-inference-service]               'self_link': None,
[e2e-llm-inference-service]               'uid': None},
[e2e-llm-inference-service]  'spec': {'baseRefs': [{'name': 'router-managed-autoscale-cleanu-53c71c79'},
[e2e-llm-inference-service]                        {'name': 'workload-llmd-simulator-no-repl-06d009c2'},
[e2e-llm-inference-service]                        {'name': 'scaling-keda-autoscale-cleanup-e034e0d7'}]},
[e2e-llm-inference-service]  'status': None}), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [delete_llmisvc] [2026-04-24T20:50:47.175278] end - ✅ in 0.019s
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_autoscaling_cleanup_keda] [2026-04-24T20:50:47.175420] end - ❌ 900.681s: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [... the same five-condition payload shown in the poll entry above ...]
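Every False condition carries the same reason, ScalingCRDNotFound: the llmisvc controller cannot resolve kind VariantAutoscaling in API group llmd.ai/v1alpha1, meaning the llm-d VariantAutoscaling CRD was never installed on this test cluster, so any variant-autoscaling/KEDA test will fail identically. A quick presence check from Python (a sketch; the CRD name variantautoscalings.llmd.ai assumes the conventional lowercase-plural naming, so adjust it if the llm-d operator names the resource differently):

    # Check whether the VariantAutoscaling CRD exists on the cluster.
    from kubernetes import client, config

    config.load_kube_config()  # use load_incluster_config() inside a pod
    api = client.ApiextensionsV1Api()
    try:
        crd = api.read_custom_resource_definition("variantautoscalings.llmd.ai")
        print("CRD present:", crd.metadata.name)
    except client.exceptions.ApiException as exc:
        if exc.status == 404:
            print("CRD missing - ScalingCRDNotFound is the expected symptom")
        else:
            raise

If the CRD is absent, installing the llm-d autoscaling manifests before the suite runs (or gating these tests on the CRD's presence) should address this class of failure.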
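The remaining frames descend into the generated kubernetes Python client: the failing call is the stock CustomObjectsApi.get_namespaced_custom_object, and every parameter is visible in the locals below. A standalone reproduction sketch, using only values shown in the traceback (assumes kubeconfig access to the test cluster):

    # Direct equivalent of the call in the frame above.
    from kubernetes import client, config

    config.load_kube_config()
    obj = client.CustomObjectsApi().get_namespaced_custom_object(
        group="serving.kserve.io",
        version="v1alpha1",
        namespace="kserve-ci-e2e-test",
        plural="llminferenceservices",
        name="autoscale-stop-hpa",
    )
    # Print each status condition: type, status, and reason if set.
    for cond in (obj.get("status") or {}).get("conditions", []):
        print(cond["type"], cond["status"], cond.get("reason", ""))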
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
group = 'serving.kserve.io', version = 'v1alpha1'
namespace = 'kserve-ci-e2e-test', plural = 'llminferenceservices'
name = 'autoscale-stop-hpa', kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...], 'auth_settings': ['BearerToken'], 'body_params': None, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'name': 'autoscale-stop-hpa', 'namespace': 'kserve-ci-e2e-test', 'plural': 'llminferenceservices', ...}
query_params = []

    def get_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, name, **kwargs):  # noqa: E501
        """get_namespaced_custom_object  # noqa: E501

        Returns a namespace scoped custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: the custom resource's group (required)
        :param str version: the custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param str name: the custom object's name (required)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """
        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'name'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'name' is set
        if self.api_client.client_side_validation and ('name' not in local_var_params or  # noqa: E501
                                                       local_var_params['name'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `name` when calling `get_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501
        if 'name' in local_var_params:
            path_params['name'] = local_var_params['name']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1739:
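All of the validation above funnels into one templated path. Worked through with this request's parameters, it yields the URL that reappears in the locals further down the chain:

    # Worked example of the path templating performed by __call_api below.
    resource_path = "/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}"
    path_params = {
        "group": "serving.kserve.io",
        "version": "v1alpha1",
        "namespace": "kserve-ci-e2e-test",
        "plural": "llminferenceservices",
        "name": "autoscale-stop-hpa",
    }
    for k, v in path_params.items():
        resource_path = resource_path.replace("{%s}" % k, v)
    # -> /apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceservices/autoscale-stop-hpa
    print(resource_path)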
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}'
method = 'GET'
path_params = {'group': 'serving.kserve.io', 'name': 'autoscale-stop-hpa', 'namespace': 'kserve-ci-e2e-test', 'plural': 'llminferenceservices', ...}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}'
method = 'GET'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1alpha1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'llminferenceservices'), ('name', 'autoscale-stop-hpa')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
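At this point the client has a fully resolved URL (configuration.host plus the substituted path). A hedged way to replay the same GET without the generated client and see the raw 500 body - the token and CA paths assume an in-cluster pod and are not taken from this log:

    # Sketch: replay the GET without the generated client to inspect the raw
    # 500 body. Token/CA paths are assumptions for an in-cluster pod.
    import urllib3

    token = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read()
    http = urllib3.PoolManager(
        ca_certs="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
    )
    r = http.request(
        "GET",
        "https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443"
        "/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test"
        "/llminferenceservices/autoscale-stop-hpa",
        headers={"Authorization": f"Bearer {token}", "Accept": "application/json"},
    )
    print(r.status, r.data.decode("utf-8"))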
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
method = 'GET'
url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceservices/autoscale-stop-hpa'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = [], body = None, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
>           return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:373:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceservices/autoscale-stop-hpa'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], _preload_content = True, _request_timeout = None

    def GET(self, url, headers=None, query_params=None, _preload_content=True,
            _request_timeout=None):
>       return self.request("GET", url,
                            headers=headers,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            query_params=query_params)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:244:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
method = 'GET'
url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceservices/autoscale-stop-hpa'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = {}, _preload_content = True, _request_timeout = None
    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        """
        method = method.upper()
        assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT',
                          'PATCH', 'OPTIONS']

        if post_params and body:
            raise ApiValueError(
                "body parameter cannot be used with post_params parameter."
            )

        post_params = post_params or {}
        headers = headers or {}

        timeout = None
        if _request_timeout:
            if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)):  # noqa: E501,F821
                timeout = urllib3.Timeout(total=_request_timeout)
            elif (isinstance(_request_timeout, tuple) and
                  len(_request_timeout) == 2):
                timeout = urllib3.Timeout(
                    connect=_request_timeout[0], read=_request_timeout[1])

        if 'Content-Type' not in headers:
            headers['Content-Type'] = 'application/json'

        try:
            # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE`
            if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']:
                if query_params:
                    url += '?' + urlencode(query_params)
                if (re.search('json', headers['Content-Type'], re.IGNORECASE) or
                        headers['Content-Type'] == 'application/apply-patch+yaml'):
                    if headers['Content-Type'] == 'application/json-patch+json':
                        if not isinstance(body, list):
                            headers['Content-Type'] = \
                                'application/strategic-merge-patch+json'
                    request_body = None
                    if body is not None:
                        request_body = json.dumps(body)
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'application/x-www-form-urlencoded':  # noqa: E501
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=False,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                elif headers['Content-Type'] == 'multipart/form-data':
                    # must del headers['Content-Type'], or the correct
                    # Content-Type which generated by urllib3 will be
                    # overwritten.
                    del headers['Content-Type']
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=True,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                # Pass a `string` parameter directly in the body to support
                # other content types than Json when `body` argument is
                # provided in serialized form
                elif isinstance(body, str) or isinstance(body, bytes):
                    request_body = body
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                else:
                    # Cannot generate the request from given parameters
                    msg = """Cannot prepare a request message for provided
                             arguments. Please check that your arguments match
                             declared content type."""
                    raise ApiException(status=0, reason=msg)
            # For `GET`, `HEAD`
            else:
                r = self.pool_manager.request(method, url,
                                              fields=query_params,
                                              preload_content=_preload_content,
                                              timeout=timeout,
                                              headers=headers)
        except urllib3.exceptions.SSLError as e:
            msg = "{0}\n{1}".format(type(e).__name__, str(e))
            raise ApiException(status=0, reason=msg)

        if _preload_content:
            r = RESTResponse(r)

            # In the python 3, the response.data is bytes.
            # we need to decode it to string.
            if six.PY3:
                r.data = r.data.decode('utf8')

            # log response body
            logger.debug("response body: %s", r.data)

        if not 200 <= r.status <= 299:
>           raise ApiException(http_resp=r)
E           kubernetes.client.exceptions.ApiException: (500)
E           Reason: Internal Server Error
E           HTTP response headers: HTTPHeaderDict({'Audit-Id': '3757b445-a82f-4b4e-a23e-51307cbcd4b1', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': '20fe89dc-62d1-40a8-bde3-b4d3893bb017', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3f197158-a38d-4553-8c16-85addf5fb0ab', 'Date': 'Fri, 24 Apr 2026 20:59:10 GMT', 'Content-Length': '264'})
E           HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"conversion webhook for serving.kserve.io/v1alpha2, Kind=LLMInferenceService failed: Post \"https://llmisvc-webhook-server-service.kserve.svc:443/convert?timeout=30s\": EOF","code":500}

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:238: ApiException

The above exception was the direct cause of the following exception:
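Note the 500 is not produced by the LLMInferenceService controller at all: the apiserver could not return the stored object because the llmisvc conversion webhook (serving.kserve.io/v1alpha2 -> v1alpha1) dropped the connection (EOF), which usually means the webhook pod restarted or stopped serving mid-run. A quick sketch to check whether the Service named in the error still has ready endpoints:

    # Sketch: does the conversion webhook Service named in the 500 body still
    # have ready endpoints? An empty list is consistent with the "EOF" error.
    from kubernetes import client, config

    config.load_kube_config()
    eps = client.CoreV1Api().read_namespaced_endpoints(
        "llmisvc-webhook-server-service", "kserve"
    )
    for subset in eps.subsets or []:
        print("ready:", [a.ip for a in subset.addresses or []])
        print("not ready:", [a.ip for a in subset.not_ready_addresses or []])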
test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-hpa'], prompt='KServe is a', ser...
{'name': 'scaling-hpa-autoscale-stop-hpa-91dfce4c'}]},
'status': None}, model_name='facebook/opt-125m')

    @pytest.mark.llminferenceservice
    @pytest.mark.autoscaling
    @pytest.mark.autoscaling_hpa
    @pytest.mark.parametrize(
        "test_case",
        [
            pytest.param(
                TestCase(
                    base_refs=[
                        "router-managed",
                        "workload-llmd-simulator-no-replicas",
                        "scaling-hpa",
                    ],
                    prompt="KServe is a",
                    service_name="autoscale-stop-hpa",
                ),
                marks=[
                    pytest.mark.cluster_cpu,
                    pytest.mark.cluster_single_node,
                    pytest.mark.llmd_simulator,
                ],
            ),
        ],
        indirect=["test_case"],
        ids=generate_test_id,
    )
    @log_execution
    def test_llm_autoscaling_stop_hpa(test_case: TestCase):
        """Setting stop annotation should delete VA and HPA."""
        inject_k8s_proxy()
        kserve_client = _new_kserve_client()
        service_name = test_case.llm_service.metadata.name

        try:
>           _create_and_wait(kserve_client, test_case)

llmisvc/test_llm_autoscaling.py:796:
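Every scaling variant of this test fails the same way, so a cheap guard would fail fast instead of spending the full 900s wait_timeout per case. A hypothetical helper (not part of the suite) that skips when the scaling CRD is absent:

    # Hypothetical guard (not in the suite): skip scaling tests up front when
    # the VariantAutoscaling CRD is missing instead of waiting out 900s each.
    import pytest
    from kubernetes import client

    def require_variant_autoscaling_crd():
        crds = client.ApiextensionsV1Api().list_custom_resource_definition()
        names = {c.metadata.name for c in crds.items}
        if "variantautoscalings.llmd.ai" not in names:
            pytest.skip("VariantAutoscaling CRD (llmd.ai/v1alpha1) not installed")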
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

kserve_client = 
test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-hpa'], prompt='KServe is a', ser...
{'name': 'scaling-hpa-autoscale-stop-hpa-91dfce4c'}]},
'status': None}, model_name='facebook/opt-125m')

    def _create_and_wait(kserve_client, test_case):
        """Create LLMISVC and wait for it to be ready."""
        create_llmisvc(kserve_client, test_case.llm_service)
>       wait_for_llm_isvc_ready(
            kserve_client, test_case.llm_service, test_case.wait_timeout
        )

llmisvc/test_llm_autoscaling.py:295:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (, {'api_version': 'serving.kserve.io/v1alpha1',
'kin...o-repl-53407346'},
{'name': 'scaling-hpa-autoscale-stop-hpa-91dfce4c'}]},
'status': None}, 900)
kwargs = {}, func_name = 'wait_for_llm_isvc_ready'
timestamp_start = '2026-04-24T20:50:47.400080', start_time = 1777063847.4003644
duration = 503.19486689567566, timestamp_end = '2026-04-24T20:59:10.595231'

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func_name = func.__name__

        timestamp_start = datetime.now().isoformat()
        logger.info(
            f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
        )
        start_time = time.time()

        try:
>           result = func(*args, **kwargs)

llmisvc/logging.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

kserve_client = 
given = {'api_version': 'serving.kserve.io/v1alpha1',
'kind': 'LLMInferenceService',
'metadata': {'annotations': None,
...ator-no-repl-53407346'},
{'name': 'scaling-hpa-autoscale-stop-hpa-91dfce4c'}]},
'status': None}
timeout_seconds = 900

    @log_execution
    def wait_for_llm_isvc_ready(
        kserve_client: KServeClient,
        given: V1alpha1LLMInferenceService,
        timeout_seconds: int = 900,
    ) -> str:
        def assert_llm_isvc_ready():
            out = get_llmisvc(
                kserve_client,
                given.metadata.name,
                given.metadata.namespace,
                given.api_version.split("/")[1],
            )

            if "status" not in out:
                raise AssertionError("No status found in LLM inference service")

            status = out["status"]
            if "conditions" not in status:
                raise AssertionError("No conditions found in status")

            expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
            got_true_conditions = set()

            conditions = status["conditions"]

            for condition in conditions:
                if condition.get("status") == "True":
                    got_true_conditions.add(condition.get("type"))

            missing_conditions = expected_true_conditions - got_true_conditions
            if missing_conditions:
                raise AssertionError(
                    f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
                )
            return True

>       return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0)

llmisvc/test_llm_inference_service.py:618:
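Applying the set arithmetic above to the conditions the controller actually reports makes the timeout inevitable - only PresetsCombined is ever True:

    # The check above, applied to the conditions the controller reports:
    expected = {"Ready", "WorkloadsReady", "RouterReady"}
    got_true = {"PresetsCombined"}  # the only condition with status == "True"
    print(expected - got_true)      # {'Ready', 'WorkloadsReady', 'RouterReady'}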
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

assertion_fn = .assert_llm_isvc_ready at 0x7f7cef6aa520>
timeout = 900, interval = 1.0

    def wait_for(
        assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
    ) -> Any:
        """Wait for the assertion to succeed within timeout."""
        deadline = time.time() + timeout
        while True:
            try:
>               return assertion_fn()

llmisvc/test_llm_inference_service.py:628:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def assert_llm_isvc_ready():
>       out = get_llmisvc(
            kserve_client,
            given.metadata.name,
            given.metadata.namespace,
            given.api_version.split("/")[1],
        )

llmisvc/test_llm_inference_service.py:588:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1')
kwargs = {}, func_name = 'get_llmisvc'
timestamp_start = '2026-04-24T20:59:10.574578', start_time = 1777064350.5748386
duration = 0.020263671875, timestamp_end = '2026-04-24T20:59:10.595103'

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func_name = func.__name__

        timestamp_start = datetime.now().isoformat()
        logger.info(
            f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
        )
        start_time = time.time()

        try:
>           result = func(*args, **kwargs)

llmisvc/logging.py:40:
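The traceback only captures wait_for's happy path. A minimal reconstruction consistent with the logged behavior is sketched below; the except branch is an assumption, not the suite's code. Notably, such a loop would only swallow AssertionError, which is why the RuntimeError raised by get_llmisvc at 20:59:10 (about 503s into the 900s window, matching the duration in the locals above) escapes immediately instead of being retried:

    # Minimal wait_for sketch consistent with the logged behavior; the except
    # branch is an assumption, since the traceback shows only the happy path.
    import time
    from typing import Any, Callable

    def wait_for(assertion_fn: Callable[[], Any],
                 timeout: float = 5.0, interval: float = 0.1) -> Any:
        deadline = time.time() + timeout
        while True:
            try:
                return assertion_fn()
            except AssertionError as exc:
                if time.time() >= deadline:
                    raise
                print(f"Waiting: {exc}")  # the suite logs this line
                time.sleep(interval)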
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

kserve_client = 
name = 'autoscale-stop-hpa', namespace = 'kserve-ci-e2e-test'
version = 'v1alpha1'

    @log_execution
    def get_llmisvc(
        kserve_client: KServeClient,
        name,
        namespace,
        version=constants.KSERVE_V1ALPHA1_VERSION,
    ):
        try:
            return kserve_client.api_instance.get_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                KSERVE_PLURAL_LLMINFERENCESERVICE,
                name,
            )
        except client.rest.ApiException as e:
>           raise RuntimeError(
                f"❌ Exception when calling CustomObjectsApi->"
                f"get_namespaced_custom_object for LLMInferenceService: {e}"
            ) from e
E           RuntimeError: ❌ Exception when calling CustomObjectsApi->get_namespaced_custom_object for LLMInferenceService: (500)
E           Reason: Internal Server Error
E           HTTP response headers: HTTPHeaderDict({'Audit-Id': '3757b445-a82f-4b4e-a23e-51307cbcd4b1', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': '20fe89dc-62d1-40a8-bde3-b4d3893bb017', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3f197158-a38d-4553-8c16-85addf5fb0ab', 'Date': 'Fri, 24 Apr 2026 20:59:10 GMT', 'Content-Length': '264'})
E           HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"conversion webhook for serving.kserve.io/v1alpha2, Kind=LLMInferenceService failed: Post \"https://llmisvc-webhook-server-service.kserve.svc:443/convert?timeout=30s\": EOF","code":500}

llmisvc/test_llm_inference_service.py:484: RuntimeError
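The ApiException body is a standard v1 Status document, so the webhook failure can be extracted programmatically rather than scraped from the wrapped string:

    # Sketch: pull the conversion-webhook message out of an ApiException body,
    # which the apiserver returns as a v1 Status object.
    import json
    from kubernetes.client.exceptions import ApiException

    def webhook_message(e: ApiException) -> str:
        return json.loads(e.body).get("message", "")

For the failure above this yields the 'conversion webhook ... /convert?timeout=30s": EOF' message verbatim.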
------------------------------ Captured log setup ------------------------------
INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-autoscale-stop-h-33b391c6 in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-autoscale-stop-h-33b391c6
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-autoscale-stop-h-33b391c6
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-llmd-simulator-no-repl-53407346 in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-llmd-simulator-no-repl-53407346
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-llmd-simulator-no-repl-53407346
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scaling-hpa-autoscale-stop-hpa-91dfce4c in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scaling-hpa-autoscale-stop-hpa-91dfce4c
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scaling-hpa-autoscale-stop-hpa-91dfce4c
------------------------------ Captured log call -------------------------------
INFO e2e.llmisvc.logging:logging.py:34 [test_llm_autoscaling_stop_hpa] [2026-04-24T20:50:47.335885] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-hpa'], prompt='KServe is a', service_name='autoscale-stop-hpa', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1',
'kind': 'LLMInferenceService',
'metadata': {'annotations': None,
'creation_timestamp': None,
'deletion_grace_period_seconds': None,
'deletion_timestamp': None,
'finalizers': None,
'generate_name': None,
'generation': None,
'labels': None,
'managed_fields': None,
'name': 'autoscale-stop-hpa',
'namespace': 'kserve-ci-e2e-test',
'owner_references': None,
'resource_version': None,
'self_link': None,
'uid': None},
'spec': {'baseRefs': [{'name': 'router-managed-autoscale-stop-h-33b391c6'},
{'name': 'workload-llmd-simulator-no-repl-53407346'},
{'name': 'scaling-hpa-autoscale-stop-hpa-91dfce4c'}]},
'status': None}, model_name='facebook/opt-125m')}
INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T20:50:47.348793] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
'kind': 'LLMInferenceService',
'metadata': {'annotations': None,
'creation_timestamp': None,
'deletion_grace_period_seconds': None,
'deletion_timestamp': None,
'finalizers': None,
'generate_name': None,
'generation': None,
'labels': None,
'managed_fields': None,
'name': 'autoscale-stop-hpa',
'namespace': 'kserve-ci-e2e-test',
'owner_references': None,
'resource_version': None,
'self_link': None,
'uid': None},
'spec': {'baseRefs': [{'name': 'router-managed-autoscale-stop-h-33b391c6'},
{'name': 'workload-llmd-simulator-no-repl-53407346'},
{'name': 'scaling-hpa-autoscale-stop-hpa-91dfce4c'}]},
'status': None}), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T20:50:47.399973] end - ✅ in 0.051s
INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T20:50:47.400080] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
'kind': 'LLMInferenceService',
'metadata': {'annotations': None,
'creation_timestamp': None,
'deletion_grace_period_seconds': None,
'deletion_timestamp': None,
'finalizers': None,
'generate_name': None,
'generation': None,
'labels': None,
'managed_fields': None,
'name': 'autoscale-stop-hpa',
'namespace': 'kserve-ci-e2e-test',
'owner_references': None,
'resource_version': None,
'self_link': None,
'uid': None},
'spec': {'baseRefs': [{'name': 'router-managed-autoscale-stop-h-33b391c6'},
{'name': 'workload-llmd-simulator-no-repl-53407346'},
{'name': 'scaling-hpa-autoscale-stop-hpa-91dfce4c'}]},
'status': None}, 900), kwargs={}
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:47.400382] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:47.405694] end - ✅ in 0.005s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:48.405973] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:48.412775] end - ✅ in 0.007s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:49.413123] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:49.420084] end - ✅ in 0.007s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:50.420342] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:50.427508] end - ✅ in 0.007s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:51.427781] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:51.435229] end - ✅ in 0.007s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:52.435709] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:52.443628] end - ✅ in 0.008s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:53.443955] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:53.451273] end - ✅ in 0.007s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:54.451709] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:54.458757] end - ✅ in 0.007s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] ... (the one-second get_llmisvc poll and the identical "Waiting: Missing true conditions" dump above repeated unchanged from 20:50:55.459047 through 20:51:27.813795; every call ended ✅ in 0.007-0.090s and the condition types, reasons, and messages never varied) ...
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:28.814130] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:28.821603] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'},
got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:29.821908] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:29.852267] end - ✅ in 0.030s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:30.852617] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:30.859774] end - ✅ 
in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:31.860073] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:31.867094] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:32.867438] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:32.874762] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:33.875078] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:33.882152] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:34.882551] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:34.890485] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:35.890833] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:35.898750] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:36.899030] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:36.906560] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:37.906893] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:37.915744] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:38.916027] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:38.923745] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:39.924084] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:39.932237] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:40.932533] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:40.939667] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:41.939947] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:41.947863] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:42.948252] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:42.955425] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:43.955730] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:43.963457] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:44.963822] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:44.971227] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:51:45.971579] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:45.979367] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:46.979703] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:46.991169] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:47.991636] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:47.999042] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:48.999338] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:49.007186] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:50.007696] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:50.015357] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:51.015658] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:51.023783] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:52.024092] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:52.036556] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:53.036991] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:53.045173] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:54.045482] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:54.053721] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:55.054059] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:55.061407] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:56.061853] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:56.069478] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [
  {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
  {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
  {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
  {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'},
  {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the three log records above (get_llmisvc start, get_llmisvc end, and the identical "Waiting: Missing true conditions" status dump) repeat once per second from 2026-04-24T20:51:57 through 2026-04-24T20:52:19; only the get_llmisvc timestamps and sub-10 ms call durations change, while every condition keeps lastTransitionTime 2026-04-24T20:50:53Z and reason ScalingCRDNotFound ...]
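Every False condition in the dump carries the same reason, ScalingCRDNotFound: the CRD for kind VariantAutoscaling (group llmd.ai, version v1alpha1) is not registered on the test cluster, so the controller cannot reconcile scaling for autoscale-stop-hpa and the Ready/WorkloadsReady conditions can never turn True. A minimal triage sketch follows, assuming the official kubernetes Python client and a KUBECONFIG pointing at the test cluster; the CRD name variantautoscalings.llmd.ai is inferred from the kind and group and may differ in an actual llm-d install:

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()  # assumes KUBECONFIG points at the test cluster
    api = client.ApiextensionsV1Api()

    try:
        # "variantautoscalings.llmd.ai" is an assumed plural name, inferred
        # from kind VariantAutoscaling / group llmd.ai.
        crd = api.read_custom_resource_definition("variantautoscalings.llmd.ai")
        served = [v.name for v in crd.spec.versions if v.served]
        print(f"CRD present, served versions: {served}")
    except ApiException as e:
        if e.status == 404:
            # Matches the controller error: no matches for kind
            # "VariantAutoscaling" in version "llmd.ai/v1alpha1".
            print("variantautoscalings.llmd.ai is not installed on this cluster")
        else:
            raise

If the lookup 404s, the failure points at cluster setup (the llm-d operator and its CRDs were never applied before the autoscaling tests ran) rather than at the test body itself.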
[... the same once-per-second cycle continues, unchanged, from 2026-04-24T20:52:20 through 2026-04-24T20:52:33 ...]
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:33.351128] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:33.358614] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:34.358929] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:34.366386] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:35.366635] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:35.373983] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:36.374279] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:36.381768] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:37.382065] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:37.389789] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:38.390091] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:38.396942] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: 
Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:39.397197] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:39.404169] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:40.404436] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:40.411807] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:41.412057] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:41.420204] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:42.420514] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:42.428009] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:43.428245] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:43.435284] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:44.435589] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:44.442927] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:45.443196] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:45.450729] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:46.451025] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:46.458119] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:47.458420] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:47.465611] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:48.465861] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:48.472749] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:49.473000] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:49.483784] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:50.484045] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:50.491281] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:51.491580] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:51.498385] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, 
got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:52.498660] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:52.506108] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:53.506354] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:53.513555] end - ✅ 
in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:54.513823] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:54.522047] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:55.522343] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:55.529134] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:56.529356] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:56.536794] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:57.537103] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:57.544066] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:58.544339] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:58.551575] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:59.551846] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:59.558822] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:00.559163] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:00.567527] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:01.568111] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:01.575034] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:02.575339] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:02.582494] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
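Because the missing CRD is a precondition that cannot heal itself, the loop above burns the full wait budget before the test is marked failed. A hypothetical pre-flight guard for the e2e setup (e.g. early in run-e2e-tests.sh) would surface this in seconds instead; the CRD name is again an assumption and would need to match whatever manifest llm-d actually ships:

  # Hypothetical pre-flight check: abort early when the scaling CRD is absent.
  if ! kubectl get crd variantautoscalings.llmd.ai >/dev/null 2>&1; then
    echo "FATAL: VariantAutoscaling (llmd.ai/v1alpha1) CRD missing; autoscale tests cannot pass" >&2
    exit 1
  fi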
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:03.582805] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:03.590433] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:04.590752] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:04.598876] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:05.599172] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:05.607033] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:06.607361] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:06.689467] end - ✅ in 0.082s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:07.689761] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:07.701625] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:53:08.702020] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:08.713454] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:09.713835] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:09.721236] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:10.721541] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:10.728988] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:11.729412] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:11.736392] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:12.736630] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:12.743781] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:13.744077] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:13.750683] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:14.751025] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:14.757735] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:15.758092] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:15.764854] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:16.765147] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:16.771602] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:17.771862] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:17.778895] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:18.779166] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:18.786059] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:19.786358] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:19.793526] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: 
Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:20.793940] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:20.800615] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:21.800869] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:21.809262] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:22.809555] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:22.816089] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:23.816343] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:23.822749] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:24.823012] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:24.829408] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:25.829689] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:25.836352] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:26.836613] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:26.843146] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:27.843469] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:27.861554] end - ✅ in 0.018s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:28.861884] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:28.868799] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:29.869174] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:29.875901] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:30.876209] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:30.882769] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:31.883038] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:31.890525] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:32.890810] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:32.898065] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, 
got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:33.898372] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:33.905069] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:34.905376] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:34.911964] end - ✅ 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:09.157891] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:09.164972] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:10.165383] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:10.172407] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:11.172694] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:11.179588] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:12.179835] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:12.186419] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:13.186695] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:13.193446] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:14.193707] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:14.201089] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, 
got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:15.201394] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:15.208863] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:16.209130] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:16.217530] end - ✅ 
in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:17.217942] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:17.225053] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:18.225337] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:18.232560] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:19.232812] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:19.240129] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:20.240352] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:20.247779] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:21.248032] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:21.256701] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:22.256975] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:22.264248] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:23.264536] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:23.271700] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:24.271984] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:24.279086] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:25.279347] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:25.287731] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:26.288034] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:26.295700] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:27.295994] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:27.303730] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:28.304069] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:28.312017] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:29.312290] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:29.319423] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:30.319714] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:30.327836] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:54:31.328089] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:31.336517] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:32.336827] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:32.344765] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:33.345040] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:33.352158] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:34.352620] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:34.359814] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:35.360109] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:35.367979] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:36.368274] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:36.375383] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:37.375660] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:37.383246] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:38.383523] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:38.391007] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:39.391287] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:39.399990] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:40.400258] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:40.408198] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:41.408528] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:41.415688] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:42.416001] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:42.423170] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: 
Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:43.423489] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:43.430523] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:44.430820] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:44.438484] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:45.438798] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:45.446140] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:46.446560] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:46.453550] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:47.453881] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:47.461830] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:48.462104] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:48.468842] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:49.469180] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:49.476508] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:50.476850] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:50.484118] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:51.484519] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:51.491948] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:52.492498] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:52.499954] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:53.500283] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:53.507856] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:54.508223] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:54.515692] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:55.515967] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:55.523086] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, 
got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:56.523348] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:56.530264] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:57.530603] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:57.537916] end - ✅ 
in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:58.538204] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:58.546208] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:59.546519] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:59.553570] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:00.553817] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:00.560714] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:01.561053] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:01.568261] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:02.568591] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:02.577845] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:03.578157] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:03.585841] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:04.586107] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:04.593872] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:05.594154] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:05.601589] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:06.601850] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:06.609541] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:07.609906] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:07.617384] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:08.617660] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:08.625085] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed 
[... identical poll iterations elided: the per-second get_llmisvc start/end pair (logging.py:34/43, ~0.007s each) and the same "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" condition dump repeat unchanged from 20:55:07 through 20:55:40 (34 iterations). Throughout, MainWorkloadReady, Ready, and WorkloadsReady stay False with reason ScalingCRDNotFound, RouterReady stays Unknown, and only PresetsCombined is True. ...]
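For reference, the wait at test_llm_inference_service.py:632 is doing the equivalent of the loop below: fetch the LLMInferenceService once per second and check its status conditions (a rough shell sketch, not the test's actual Python; the resource name llminferenceservices.serving.kserve.io and checking only the Ready condition are assumptions):

    # Poll the LLMInferenceService roughly the way the test does, once per
    # second, until the Ready condition reports True (the test additionally
    # requires RouterReady and WorkloadsReady to be True).
    while true; do
      ready=$(kubectl get llminferenceservices.serving.kserve.io autoscale-stop-hpa \
        -n kserve-ci-e2e-test \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')
      [ "$ready" = "True" ] && break
      sleep 1
    done

With the CRD missing, the controller pins Ready to False, so a loop like this (and the test's) can only run until its timeout. The log resumes at the 20:55:41 iteration: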
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:41.879826] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:41.886768] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:42.887026] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:42.894769] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:43.895048] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:43.902691] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:44.903117] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:44.913565] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:45.913854] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:45.921395] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:46.921695] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:46.929130] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:47.929693] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:47.936977] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:48.937291] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:48.944685] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:49.944986] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:49.952516] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:50.952867] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:50.960216] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:51.960577] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:51.968131] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:52.968506] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:52.975602] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:55:53.975909] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:53.984546] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:54.984860] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:55.002855] end - ✅ in 0.018s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:56.003168] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:56.010174] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:57.010464] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:57.016925] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:58.017230] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:58.024547] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:59.025002] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:59.032402] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:00.032742] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:00.040829] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:01.041285] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:01.048409] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:02.048730] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:02.056330] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:03.056687] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:03.063937] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:04.064334] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:04.072098] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:05.072452] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:05.078968] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: 
Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:06.079237] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:06.086457] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:07.086724] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:07.093763] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:08.094116] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:08.106057] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:09.106380] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:09.115110] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:10.115441] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:10.123189] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 
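The wait loop above makes no progress because every poll returns the same ScalingCRDNotFound condition: the controller cannot resolve kind "VariantAutoscaling" in group/version "llmd.ai/v1alpha1", which on Kubernetes means the corresponding CustomResourceDefinition is not installed on the test cluster, so the autoscale-stop-hpa service can never report Ready or WorkloadsReady. A minimal way to confirm the missing CRD from a shell, assuming KUBECONFIG already points at the test cluster; the plural CRD name variantautoscalings.llmd.ai is an assumption to verify against the llm-d manifests:

  # List every API resource served by the llmd.ai group; an empty result
  # matches the "no matches for kind" error repeated in the log above.
  kubectl api-resources --api-group=llmd.ai

  # Look for the VariantAutoscaling CRD directly (plural name assumed).
  kubectl get crd variantautoscalings.llmd.ai

If the CRD is absent, the fix belongs in the deploy step (install the llm-d operator/CRDs before the e2e run) rather than in the test itself.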
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:18.173577] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:18.180898] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'},
got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:19.181199] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:19.188972] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:20.189327] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:20.198680] end - ✅ 
in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:21.198947] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:21.206289] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:22.206636] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:22.215143] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:23.215612] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:23.222440] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:24.222733] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:24.229435] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:25.229822] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:25.237783] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:26.238098] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:26.245331] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:27.245624] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:27.253168] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:28.253493] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:28.260796] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:29.261073] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:29.268620] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:30.268954] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:30.276198] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:31.276481] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:31.283589] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:32.283946] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:32.291255] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:33.291605] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:33.299606] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:34.299908] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:34.306885] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:56:35.307177] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:35.314615] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:36.314879] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:36.322638] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:37.322911] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:37.330266] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:38.330687] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:38.338036] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:39.338389] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:39.345925] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:40.346237] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:40.353955] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:41.354344] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:41.360791] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:42.361141] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:42.368816] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... the get_llmisvc poll (logging.py:34 start / logging.py:43 end, ~0.007s each) and the identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" condition dump above repeat once per second from 2026-04-24T20:56:43.369190 through 2026-04-24T20:57:20.681363; every iteration reports the same five conditions (PresetsCombined True; MainWorkloadReady, Ready, and WorkloadsReady False with reason 'ScalingCRDNotFound'; RouterReady Unknown), all with lastTransitionTime 2026-04-24T20:50:53Z ...]
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:20.674628] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:20.681363] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:21.681671] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:21.687805] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:22.688044] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:22.694376] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:23.694702] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:23.701809] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:24.702157] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:24.709603] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:25.710001] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:25.718176] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:26.718425] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:26.725217] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:27.725561] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:27.732357] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: 
Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:28.732676] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:28.739326] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:29.739585] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:29.746981] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:30.747573] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:30.754752] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:31.755224] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:31.762079] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:32.762468] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:32.769107] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:33.769463] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:33.777238] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:34.777545] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:34.785642] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:35.785982] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:35.793398] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:36.793729] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:36.801002] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:37.801338] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:37.809121] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:38.809465] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:38.816641] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:39.816951] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:39.824393] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:40.824755] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:40.832290] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, 
got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:41.832610] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:41.840200] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:42.840522] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:42.848084] end - ✅ 
in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:43.848504] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:43.855346] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:44.855653] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:44.862513] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:45.862840] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:45.870519] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:46.870824] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:46.877997] end - ✅ in 0.007s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [
    {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
    {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
    {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': '<same ScalingCRDNotFound message as MainWorkloadReady>', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
    {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'},
    {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': '<same ScalingCRDNotFound message as MainWorkloadReady>', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the get_llmisvc poll and this identical condition dump repeat once per second from 20:57:47 through 20:58:01; only the timestamps and the 0.007-0.015s call durations vary ...]
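The repeating "Waiting" lines come from the test's readiness loop: once per second it re-fetches the LLMInferenceService and diffs the condition types whose status is 'True' against the expected set {'Ready', 'RouterReady', 'WorkloadsReady'}. A minimal sketch of that polling pattern follows; it is an illustration only, and the group serving.kserve.io and plural llminferenceservices are assumptions inferred from the resource kind, since the log shows only the name, namespace, and version.

    import time
    from kubernetes import client, config

    def wait_for_conditions(name, namespace, expected, timeout_s=600):
        """Poll a namespaced custom resource until every expected condition type is True."""
        config.load_kube_config()
        api = client.CustomObjectsApi()
        deadline = time.time() + timeout_s
        while time.time() < deadline:
            # group/version/plural are assumed values for illustration
            obj = api.get_namespaced_custom_object(
                group="serving.kserve.io", version="v1alpha1",
                namespace=namespace, plural="llminferenceservices", name=name)
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = expected - true_types
            if not missing:
                return obj
            print(f"Waiting: Missing true conditions: {missing}, expected {expected}")
            time.sleep(1)
        raise TimeoutError(f"conditions {expected} never became True for {namespace}/{name}")

    # e.g.: wait_for_conditions("autoscale-stop-hpa", "kserve-ci-e2e-test",
    #                           {"Ready", "RouterReady", "WorkloadsReady"})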
[... the same poll/dump cycle continues unchanged from 20:58:02 through 20:58:11 ...]
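Every iteration fails for the same root cause: the llmisvc controller tries to reconcile a VariantAutoscaling resource, but the API server has no CRD registered that serves llmd.ai/v1alpha1 (hence reason ScalingCRDNotFound), so MainWorkloadReady, WorkloadsReady, and Ready can never turn True and the loop runs until timeout. One way to confirm the missing CRD from a client is sketched below; the CRD name variantautoscalings.llmd.ai is an assumption derived from the kind and group in the error message, not something the log confirms.

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    def crd_exists(crd_name: str) -> bool:
        """Return True if the named CRD is registered with the API server."""
        config.load_kube_config()
        api = client.ApiextensionsV1Api()
        try:
            api.read_custom_resource_definition(crd_name)
            return True
        except ApiException as exc:
            if exc.status == 404:  # not registered: the "no matches for kind" case
                return False
            raise

    # The plural below is inferred from kind VariantAutoscaling / group llmd.ai;
    # verify the real name with: kubectl api-resources --api-group=llmd.ai
    print(crd_exists("variantautoscalings.llmd.ai"))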
[... polling continues identically from 20:58:12 through 20:58:23, every iteration reporting the same ScalingCRDNotFound conditions ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:24.183327] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:24.190918] end - ✅
in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:25.191175] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:25.197666] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:26.197933] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:26.204594] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:27.204879] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:27.212769] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:28.213071] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:28.220406] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:29.220726] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:29.228333] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 
'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:30.228675] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:30.236916] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:31.237254] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:31.244728] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:32.245023] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:32.253207] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:33.253576] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:33.261225] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:34.261649] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:34.269929] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:35.270263] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:35.278579] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:36.278905] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:36.288233] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:37.288703] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:37.296136] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:38.296477] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:38.303894] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:58:39.304187] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:39.312268] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:40.312575] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:40.319932] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:41.320266] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:41.327884] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:42.328352] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:42.336057] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:43.336545] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:43.344416] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:44.344718] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:44.352972] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to 
reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:45.353339] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:45.361168] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:46.361534] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:46.369430] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 
'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:47.369826] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:47.376891] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:48.377219] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:48.384718] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main 
VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:49.385002] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:49.394399] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:50.394712] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:50.402028] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: 
Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:51.402348] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:51.410289] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:52.410604] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), 
kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:52.418618] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:53.419001] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:53.431430] end - ✅ in 0.012s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 
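The repeated ScalingCRDNotFound condition shows why the wait loop cannot make progress: the llmisvc controller tries to read a VariantAutoscaling object for autoscale-stop-hpa, but the API server has no CRD registered for kind "VariantAutoscaling" in llmd.ai/v1alpha1, so every one-second poll sees the exact same status. A quick way to confirm the missing CRD is to ask the API server for it directly. Below is a minimal diagnostic sketch (not part of this test suite) using the kubernetes Python client; the resource name "variantautoscalings.llmd.ai" is a plural inferred from the kind and group in the error message, not something this log confirms, so check it against the llm-d CRD manifests. The final poll of the run, at 20:58:55, follows after the sketch.

    # Diagnostic sketch: is the CRD the llmisvc controller needs installed?
    # Assumes a reachable cluster via kubeconfig and the `kubernetes` client.
    # "variantautoscalings.llmd.ai" is an inferred plural name (assumption).
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()  # inside a pod, use config.load_incluster_config()
    api = client.ApiextensionsV1Api()

    try:
        crd = api.read_custom_resource_definition("variantautoscalings.llmd.ai")
        served = [v.name for v in crd.spec.versions if v.served]
        print("CRD found; served versions:", served)
    except ApiException as exc:
        if exc.status == 404:
            # Same situation the controller surfaces as ScalingCRDNotFound:
            # no mapping for kind "VariantAutoscaling" in llmd.ai/v1alpha1.
            print("VariantAutoscaling CRD is not installed on this cluster")
        else:
            raise

If the CRD is absent, the polling can never succeed: the conditions the test waits on are owned by a reconcile that fails before it starts, so the status (and its 2026-04-24T20:50:53Z transition time) never changes and the wait runs until its timeout.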
'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:54.431788] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:54.439522] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:55.439943] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:55.447447] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 
'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:56.447839] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:56.456671] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:57.456980] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:57.464104] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in 
version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:58.464488] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:58.471845] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:59.472157] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:59.479609] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, 
{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:00.479877] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:59:00.487811] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:01.488097] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:59:01.496046] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind 
"VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:02.496352] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:59:02.504030] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:03.504325] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:59:03.514398] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, 
got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:04.514690] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:59:04.521843] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:05.522098] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:59:05.529294] end - ✅ 
in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:06.529664] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:59:06.545692] end - ✅ in 0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO 
e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:07.545938] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:59:07.554669] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:08.555159] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:59:08.565923] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get 
v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:09.566289] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:59:09.574252] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'RouterReady', 'WorkloadsReady'}, got [{'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:50:53Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-hpa-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:10.574578] start - args=(, 'autoscale-stop-hpa', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [get_llmisvc] [2026-04-24T20:59:10.595103] end - ❌ 0.020s: ❌ Exception when calling CustomObjectsApi->get_namespaced_custom_object for LLMInferenceService: (500) [e2e-llm-inference-service] Reason: Internal Server Error [e2e-llm-inference-service] HTTP response headers: HTTPHeaderDict({'Audit-Id': '3757b445-a82f-4b4e-a23e-51307cbcd4b1', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': '20fe89dc-62d1-40a8-bde3-b4d3893bb017', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3f197158-a38d-4553-8c16-85addf5fb0ab', 'Date': 'Fri, 24 Apr 2026 20:59:10 GMT', 'Content-Length': '264'}) [e2e-llm-inference-service] HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"conversion webhook for serving.kserve.io/v1alpha2, Kind=LLMInferenceService failed: Post \"https://llmisvc-webhook-server-service.kserve.svc:443/convert?timeout=30s\": EOF","code":500} [e2e-llm-inference-service] [e2e-llm-inference-service] [e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 
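The root cause surfaced in the condition payload is straightforward: the API server has no mapping for kind "VariantAutoscaling" in "llmd.ai/v1alpha1", i.e. the llm-d VariantAutoscaling CRD is not installed on this cluster, so the controller cannot reconcile the HPA-stop scaling path. A minimal sketch of how one might confirm that from the same Python environment; the group/version/kind come from the log, while the plural "variantautoscalings" (and hence the CRD name) is an assumption:

# Sketch: check whether the VariantAutoscaling CRD the controller is looking
# for is registered at all. The CRD name below is assumed from the usual
# <plural>.<group> convention.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() on-cluster

ext = client.ApiextensionsV1Api()
installed = {crd.metadata.name for crd in ext.list_custom_resource_definition().items}

expected = "variantautoscalings.llmd.ai"
if expected not in installed:
    print(f"{expected} missing - matches the ScalingCRDNotFound condition")
else:
    crd = ext.read_custom_resource_definition(expected)
    print("served versions:", [v.name for v in crd.spec.versions if v.served])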
ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T20:59:10.595231] end - ❌ 503.195s: ❌ Exception when calling CustomObjectsApi->get_namespaced_custom_object for LLMInferenceService: (500)
Reason: Internal Server Error
[... same HTTP response headers and body as the [get_llmisvc] error above (Audit-Id 3757b445-a82f-4b4e-a23e-51307cbcd4b1, conversion webhook EOF, code 500) ...]

INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T20:59:10.595327] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
 'kind': 'LLMInferenceService',
 'metadata': {'annotations': None,
              'creation_timestamp': None,
              'deletion_grace_period_seconds': None,
              'deletion_timestamp': None,
              'finalizers': None,
              'generate_name': None,
              'generation': None,
              'labels': None,
              'managed_fields': None,
              'name': 'autoscale-stop-hpa',
              'namespace': 'kserve-ci-e2e-test',
              'owner_references': None,
              'resource_version': None,
              'self_link': None,
              'uid': None},
 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-stop-h-33b391c6'},
                       {'name': 'workload-llmd-simulator-no-repl-53407346'},
                       {'name': 'scaling-hpa-autoscale-stop-hpa-91dfce4c'}]},
 'status': None}), kwargs={}
ERROR e2e.llmisvc.logging:logging.py:48 [delete_llmisvc] [2026-04-24T20:59:10.613539] end - ❌ 0.018s: ❌ Exception when calling CustomObjectsApi->delete_namespaced_custom_object for LLMInferenceService: (500)
Reason: Internal Server Error
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'c97e407b-11b3-4766-9bbc-2d2f4c67fa87', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': '20fe89dc-62d1-40a8-bde3-b4d3893bb017', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3f197158-a38d-4553-8c16-85addf5fb0ab', 'Date': 'Fri, 24 Apr 2026 20:59:10 GMT', 'Content-Length': '264'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"conversion webhook for serving.kserve.io/v1alpha2, Kind=LLMInferenceService failed: Post \"https://llmisvc-webhook-server-service.kserve.svc:443/convert?timeout=30s\": EOF","code":500}
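Both the GET and the DELETE above fail with the same 500: the requests ask for v1alpha1 while the error message suggests the hub version is v1alpha2, so the API server must round-trip the object through the CRD's conversion webhook, and that webhook (llmisvc-webhook-server-service.kserve.svc:443/convert) is dropping connections with EOF. A sketch for checking the conversion config and whether the webhook Service has any healthy endpoints; the CRD name is derived from the group/plural seen in the traceback, and the rest is stock kubernetes-client usage:

# Sketch: inspect the LLMInferenceService CRD's conversion strategy and the
# Service backing the failing /convert call.
from kubernetes import client, config

config.load_kube_config()

ext = client.ApiextensionsV1Api()
crd = ext.read_custom_resource_definition("llminferenceservices.serving.kserve.io")
conv = crd.spec.conversion
print("conversion strategy:", conv.strategy)  # "Webhook" would explain the 500s
if conv.strategy == "Webhook":
    svc = conv.webhook.client_config.service
    print("webhook service:", svc.namespace, svc.name, svc.port, svc.path)

# An EOF on /convert usually means the webhook pod is gone or crash-looping;
# a Service with no ready endpoints would confirm that.
core = client.CoreV1Api()
eps = core.read_namespaced_endpoints("llmisvc-webhook-server-service", "kserve")
ready = [a.ip for s in (eps.subsets or []) for a in (s.addresses or [])]
print("ready webhook endpoints:", ready or "none")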
WARNING e2e.llmisvc.test_llm_autoscaling:test_llm_autoscaling.py:306 Failed to cleanup service: ❌ Exception when calling CustomObjectsApi->delete_namespaced_custom_object for LLMInferenceService: (500)
Reason: Internal Server Error
[... same HTTP response headers and body as the [delete_llmisvc] error above (Audit-Id c97e407b-11b3-4766-9bbc-2d2f4c67fa87, conversion webhook EOF, code 500) ...]

ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_autoscaling_stop_hpa] [2026-04-24T20:59:10.613668] end - ❌ 503.277s: ❌ Exception when calling CustomObjectsApi->get_namespaced_custom_object for LLMInferenceService: (500)
Reason: Internal Server Error
[... same HTTP response headers and body as the [get_llmisvc] error above (Audit-Id 3757b445-a82f-4b4e-a23e-51307cbcd4b1, conversion webhook EOF, code 500) ...]
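For context on the repeated "Waiting" entries: the wait helper polls get_llmisvc roughly once per second and diffs status.conditions against the expected set until a timeout (about 503 s elapsed here before the webhook broke). A hedged reconstruction of that pattern; the helper names mirror the log, but the body, default timeout, and signatures are assumptions, not the actual test source:

# Sketch of the wait loop implied by the log entries above: poll the
# LLMInferenceService once per second and succeed only when every expected
# condition reports status "True".
import time

EXPECTED = {"Ready", "RouterReady", "WorkloadsReady"}

def wait_for_llm_isvc_ready(kserve_client, name, namespace, timeout_s=600):
    conditions = []
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        isvc = get_llmisvc(kserve_client, name, namespace)  # fixture shown below
        conditions = (isvc.get("status") or {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return isvc
        print(f"Waiting: Missing true conditions: {missing}, expected {EXPECTED}")
        time.sleep(1)
    raise TimeoutError(f"{name} never became ready; last conditions: {conditions}")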
_ test_llm_autoscaling_stop_keda[router-managed-workload-llmd-simulator-no-replicas-scaling-keda] _
[gw1] linux -- Python 3.11.13 /workspace/source/python/kserve/.venv/bin/python

kserve_client = 
name = 'autoscale-stop-keda', namespace = 'kserve-ci-e2e-test'
version = 'v1alpha1'

    @log_execution
    def get_llmisvc(
        kserve_client: KServeClient,
        name,
        namespace,
        version=constants.KSERVE_V1ALPHA1_VERSION,
    ):
        try:
>           return kserve_client.api_instance.get_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                KSERVE_PLURAL_LLMINFERENCESERVICE,
                name,
            )

llmisvc/test_llm_inference_service.py:476:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
group = 'serving.kserve.io', version = 'v1alpha1'
namespace = 'kserve-ci-e2e-test', plural = 'llminferenceservices'
name = 'autoscale-stop-keda', kwargs = {'_return_http_data_only': True}

    def get_namespaced_custom_object(self, group, version, namespace, plural, name, **kwargs):  # noqa: E501
        """get_namespaced_custom_object  # noqa: E501

        Returns a namespace scoped custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_namespaced_custom_object(group, version, namespace, plural, name, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: the custom resource's group (required)
        :param str version: the custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param str name: the custom object's name (required)
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: object
                 If the method is called asynchronously,
                 returns the request thread.
        """
        kwargs['_return_http_data_only'] = True
>       return self.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, **kwargs)  # noqa: E501

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1632:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
group = 'serving.kserve.io', version = 'v1alpha1'
namespace = 'kserve-ci-e2e-test', plural = 'llminferenceservices'
name = 'autoscale-stop-keda', kwargs = {'_return_http_data_only': True}
local_var_params = {'_return_http_data_only': True, 'all_params': ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...], 'auth_settings': ['BearerToken'], 'body_params': None, ...}
all_params = ['group', 'version', 'namespace', 'plural', 'name', 'async_req', ...]
key = '_return_http_data_only', val = True, collection_formats = {}
path_params = {'group': 'serving.kserve.io', 'name': 'autoscale-stop-keda', 'namespace': 'kserve-ci-e2e-test', 'plural': 'llminferenceservices', ...}
query_params = []

    def get_namespaced_custom_object_with_http_info(self, group, version, namespace, plural, name, **kwargs):  # noqa: E501
        """get_namespaced_custom_object  # noqa: E501

        Returns a namespace scoped custom object  # noqa: E501
        This method makes a synchronous HTTP request by default. To make an
        asynchronous HTTP request, please pass async_req=True
        >>> thread = api.get_namespaced_custom_object_with_http_info(group, version, namespace, plural, name, async_req=True)
        >>> result = thread.get()

        :param async_req bool: execute request asynchronously
        :param str group: the custom resource's group (required)
        :param str version: the custom resource's version (required)
        :param str namespace: The custom resource's namespace (required)
        :param str plural: the custom resource's plural name. For TPRs this would be lowercase plural kind. (required)
        :param str name: the custom object's name (required)
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return: tuple(object, status_code(int), headers(HTTPHeaderDict))
                 If the method is called asynchronously,
                 returns the request thread.
        """

        local_var_params = locals()

        all_params = [
            'group',
            'version',
            'namespace',
            'plural',
            'name'
        ]
        all_params.extend(
            [
                'async_req',
                '_return_http_data_only',
                '_preload_content',
                '_request_timeout'
            ]
        )

        for key, val in six.iteritems(local_var_params['kwargs']):
            if key not in all_params:
                raise ApiTypeError(
                    "Got an unexpected keyword argument '%s'"
                    " to method get_namespaced_custom_object" % key
                )
            local_var_params[key] = val
        del local_var_params['kwargs']
        # verify the required parameter 'group' is set
        if self.api_client.client_side_validation and ('group' not in local_var_params or  # noqa: E501
                                                       local_var_params['group'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `group` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'version' is set
        if self.api_client.client_side_validation and ('version' not in local_var_params or  # noqa: E501
                                                       local_var_params['version'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `version` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'namespace' is set
        if self.api_client.client_side_validation and ('namespace' not in local_var_params or  # noqa: E501
                                                       local_var_params['namespace'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `namespace` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'plural' is set
        if self.api_client.client_side_validation and ('plural' not in local_var_params or  # noqa: E501
                                                       local_var_params['plural'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `plural` when calling `get_namespaced_custom_object`")  # noqa: E501
        # verify the required parameter 'name' is set
        if self.api_client.client_side_validation and ('name' not in local_var_params or  # noqa: E501
                                                       local_var_params['name'] is None):  # noqa: E501
            raise ApiValueError("Missing the required parameter `name` when calling `get_namespaced_custom_object`")  # noqa: E501

        collection_formats = {}

        path_params = {}
        if 'group' in local_var_params:
            path_params['group'] = local_var_params['group']  # noqa: E501
        if 'version' in local_var_params:
            path_params['version'] = local_var_params['version']  # noqa: E501
        if 'namespace' in local_var_params:
            path_params['namespace'] = local_var_params['namespace']  # noqa: E501
        if 'plural' in local_var_params:
            path_params['plural'] = local_var_params['plural']  # noqa: E501
        if 'name' in local_var_params:
            path_params['name'] = local_var_params['name']  # noqa: E501

        query_params = []

        header_params = {}

        form_params = []
        local_var_files = {}

        body_params = None
        # HTTP header `Accept`
        header_params['Accept'] = self.api_client.select_header_accept(
            ['application/json'])  # noqa: E501

        # Authentication setting
        auth_settings = ['BearerToken']  # noqa: E501

>       return self.api_client.call_api(
            '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}', 'GET',
            path_params,
            query_params,
            header_params,
            body=body_params,
            post_params=form_params,
            files=local_var_files,
            response_type='object',  # noqa: E501
            auth_settings=auth_settings,
            async_req=local_var_params.get('async_req'),
            _return_http_data_only=local_var_params.get('_return_http_data_only'),  # noqa: E501
            _preload_content=local_var_params.get('_preload_content', True),
            _request_timeout=local_var_params.get('_request_timeout'),
            collection_formats=collection_formats)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api/custom_objects_api.py:1739:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
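The frames below are generated OpenAPI plumbing; the whole chain reduces to a single HTTP GET against /apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceservices/autoscale-stop-keda. A standalone sketch that reproduces the failing call outside pytest (assumes a kubeconfig pointing at the same cluster):

# Sketch: while the conversion webhook is unreachable, this should raise
# ApiException(500) whose body is the same kind=Status "conversion webhook
# ... EOF" message logged above.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
api = client.CustomObjectsApi()

try:
    obj = api.get_namespaced_custom_object(
        "serving.kserve.io", "v1alpha1",
        "kserve-ci-e2e-test", "llminferenceservices",
        "autoscale-stop-keda",
    )
    print("conditions:", (obj.get("status") or {}).get("conditions"))
except ApiException as exc:
    print("HTTP status:", exc.status)
    print("body:", exc.body)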
self = 
resource_path = '/apis/{group}/{version}/namespaces/{namespace}/{plural}/{name}'
method = 'GET'
path_params = {'group': 'serving.kserve.io', 'name': 'autoscale-stop-keda', 'namespace': 'kserve-ci-e2e-test', 'plural': 'llminferenceservices', ...}
query_params = []
header_params = {'Accept': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], async_req = None, _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def call_api(self, resource_path, method,
                 path_params=None, query_params=None, header_params=None,
                 body=None, post_params=None, files=None,
                 response_type=None, auth_settings=None, async_req=None,
                 _return_http_data_only=None, collection_formats=None,
                 _preload_content=True, _request_timeout=None, _host=None):
        """Makes the HTTP request (synchronous) and returns deserialized data.

        To make an async_req request, set the async_req parameter.

        :param resource_path: Path to method endpoint.
        :param method: Method to call.
        :param path_params: Path parameters in the url.
        :param query_params: Query parameters in the url.
        :param header_params: Header parameters to be
            placed in the request header.
        :param body: Request body.
        :param post_params dict: Request post form parameters,
            for `application/x-www-form-urlencoded`, `multipart/form-data`.
        :param auth_settings list: Auth Settings names for the request.
        :param response: Response data type.
        :param files dict: key -> filename, value -> filepath,
            for `multipart/form-data`.
        :param async_req bool: execute request asynchronously
        :param _return_http_data_only: response data without head status code
                                       and headers
        :param collection_formats: dict of collection formats for path, query,
            header, and post parameters.
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
        :return:
            If async_req parameter is True,
            the request will be called asynchronously.
            The method will return the request thread.
            If parameter async_req is False or missing,
            then the method will return the response directly.
        """
        if not async_req:
>           return self.__call_api(resource_path, method,
                                   path_params, query_params, header_params,
                                   body, post_params, files,
                                   response_type, auth_settings,
                                   _return_http_data_only, collection_formats,
                                   _preload_content, _request_timeout, _host)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:348:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
resource_path = '/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceservices/autoscale-stop-keda'
method = 'GET'
path_params = [('group', 'serving.kserve.io'), ('version', 'v1alpha1'), ('namespace', 'kserve-ci-e2e-test'), ('plural', 'llminferenceservices'), ('name', 'autoscale-stop-keda')]
query_params = []
header_params = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = [], files = {}, response_type = 'object'
auth_settings = ['BearerToken'], _return_http_data_only = True
collection_formats = {}, _preload_content = True, _request_timeout = None
_host = None

    def __call_api(
            self, resource_path, method, path_params=None,
            query_params=None, header_params=None, body=None, post_params=None,
            files=None, response_type=None, auth_settings=None,
            _return_http_data_only=None, collection_formats=None,
            _preload_content=True, _request_timeout=None, _host=None):

        config = self.configuration

        # header parameters
        header_params = header_params or {}
        header_params.update(self.default_headers)
        if self.cookie:
            header_params['Cookie'] = self.cookie
        if header_params:
            header_params = self.sanitize_for_serialization(header_params)
            header_params = dict(self.parameters_to_tuples(header_params,
                                                           collection_formats))

        # path parameters
        if path_params:
            path_params = self.sanitize_for_serialization(path_params)
            path_params = self.parameters_to_tuples(path_params,
                                                    collection_formats)
            for k, v in path_params:
                # specified safe chars, encode everything
                resource_path = resource_path.replace(
                    '{%s}' % k,
                    quote(str(v), safe=config.safe_chars_for_path_param)
                )

        # query parameters
        if query_params:
            query_params = self.sanitize_for_serialization(query_params)
            query_params = self.parameters_to_tuples(query_params,
                                                     collection_formats)

        # post parameters
        if post_params or files:
            post_params = post_params if post_params else []
            post_params = self.sanitize_for_serialization(post_params)
            post_params = self.parameters_to_tuples(post_params,
                                                    collection_formats)
            post_params.extend(self.files_parameters(files))

        # auth setting
        self.update_params_for_auth(header_params, query_params, auth_settings)

        # body
        if body:
            body = self.sanitize_for_serialization(body)

        # request url
        if _host is None:
            url = self.configuration.host + resource_path
        else:
            # use server/host defined in path or operation instead
            url = _host + resource_path

        # perform request and return response
>       response_data = self.request(
            method, url, query_params=query_params, headers=header_params,
            post_params=post_params, body=body,
            _preload_content=_preload_content,
            _request_timeout=_request_timeout)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:180:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <kubernetes.client.api_client.ApiClient object at 0x...>
method = 'GET'
url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceservices/autoscale-stop-keda'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
post_params = [], body = None, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                post_params=None, body=None, _preload_content=True,
                _request_timeout=None):
        """Makes the HTTP request using RESTClient."""
        if method == "GET":
>           return self.rest_client.GET(url,
                                        query_params=query_params,
                                        _preload_content=_preload_content,
                                        _request_timeout=_request_timeout,
../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/api_client.py:373:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <kubernetes.client.rest.RESTClientObject object at 0x...>
url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceservices/autoscale-stop-keda'
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
query_params = [], _preload_content = True, _request_timeout = None

    def GET(self, url, headers=None, query_params=None, _preload_content=True,
            _request_timeout=None):
>       return self.request("GET", url,
                            headers=headers,
                            _preload_content=_preload_content,
                            _request_timeout=_request_timeout,
                            query_params=query_params)

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:244:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <kubernetes.client.rest.RESTClientObject object at 0x...>
method = 'GET'
url = 'https://a75167836e0a148f398e4ac105296cf9-76426e05911553ae.elb.us-east-1.amazonaws.com:6443/apis/serving.kserve.io/v1alpha1/namespaces/kserve-ci-e2e-test/llminferenceservices/autoscale-stop-keda'
query_params = []
headers = {'Accept': 'application/json', 'Content-Type': 'application/json', 'User-Agent': 'OpenAPI-Generator/32.0.1/python'}
body = None, post_params = {}, _preload_content = True, _request_timeout = None

    def request(self, method, url, query_params=None, headers=None,
                body=None, post_params=None, _preload_content=True,
                _request_timeout=None):
        """Perform requests.

        :param method: http request method
        :param url: http request url
        :param query_params: query parameters in the url
        :param headers: http request headers
        :param body: request json body, for `application/json`
        :param post_params: request post parameters,
                            `application/x-www-form-urlencoded`
                            and `multipart/form-data`
        :param _preload_content: if False, the urllib3.HTTPResponse object will
                                 be returned without reading/decoding response
                                 data. Default is True.
        :param _request_timeout: timeout setting for this request. If one
                                 number provided, it will be total request
                                 timeout. It can also be a pair (tuple) of
                                 (connection, read) timeouts.
[e2e-llm-inference-service] """ [e2e-llm-inference-service] method = method.upper() [e2e-llm-inference-service] assert method in ['GET', 'HEAD', 'DELETE', 'POST', 'PUT', [e2e-llm-inference-service] 'PATCH', 'OPTIONS'] [e2e-llm-inference-service] [e2e-llm-inference-service] if post_params and body: [e2e-llm-inference-service] raise ApiValueError( [e2e-llm-inference-service] "body parameter cannot be used with post_params parameter." [e2e-llm-inference-service] ) [e2e-llm-inference-service] [e2e-llm-inference-service] post_params = post_params or {} [e2e-llm-inference-service] headers = headers or {} [e2e-llm-inference-service] [e2e-llm-inference-service] timeout = None [e2e-llm-inference-service] if _request_timeout: [e2e-llm-inference-service] if isinstance(_request_timeout, (int, ) if six.PY3 else (int, long)): # noqa: E501,F821 [e2e-llm-inference-service] timeout = urllib3.Timeout(total=_request_timeout) [e2e-llm-inference-service] elif (isinstance(_request_timeout, tuple) and [e2e-llm-inference-service] len(_request_timeout) == 2): [e2e-llm-inference-service] timeout = urllib3.Timeout( [e2e-llm-inference-service] connect=_request_timeout[0], read=_request_timeout[1]) [e2e-llm-inference-service] [e2e-llm-inference-service] if 'Content-Type' not in headers: [e2e-llm-inference-service] headers['Content-Type'] = 'application/json' [e2e-llm-inference-service] [e2e-llm-inference-service] try: [e2e-llm-inference-service] # For `POST`, `PUT`, `PATCH`, `OPTIONS`, `DELETE` [e2e-llm-inference-service] if method in ['POST', 'PUT', 'PATCH', 'OPTIONS', 'DELETE']: [e2e-llm-inference-service] if query_params: [e2e-llm-inference-service] url += '?' + urlencode(query_params) [e2e-llm-inference-service] if (re.search('json', headers['Content-Type'], re.IGNORECASE) or [e2e-llm-inference-service] headers['Content-Type'] == 'application/apply-patch+yaml'): [e2e-llm-inference-service] if headers['Content-Type'] == 'application/json-patch+json': [e2e-llm-inference-service] if not isinstance(body, list): [e2e-llm-inference-service] headers['Content-Type'] = \ [e2e-llm-inference-service] 'application/strategic-merge-patch+json' [e2e-llm-inference-service] request_body = None [e2e-llm-inference-service] if body is not None: [e2e-llm-inference-service] request_body = json.dumps(body) [e2e-llm-inference-service] r = self.pool_manager.request( [e2e-llm-inference-service] method, url, [e2e-llm-inference-service] body=request_body, [e2e-llm-inference-service] preload_content=_preload_content, [e2e-llm-inference-service] timeout=timeout, [e2e-llm-inference-service] headers=headers) [e2e-llm-inference-service] elif headers['Content-Type'] == 'application/x-www-form-urlencoded': # noqa: E501 [e2e-llm-inference-service] r = self.pool_manager.request( [e2e-llm-inference-service] method, url, [e2e-llm-inference-service] fields=post_params, [e2e-llm-inference-service] encode_multipart=False, [e2e-llm-inference-service] preload_content=_preload_content, [e2e-llm-inference-service] timeout=timeout, [e2e-llm-inference-service] headers=headers) [e2e-llm-inference-service] elif headers['Content-Type'] == 'multipart/form-data': [e2e-llm-inference-service] # must del headers['Content-Type'], or the correct [e2e-llm-inference-service] # Content-Type which generated by urllib3 will be [e2e-llm-inference-service] # overwritten. 
                    del headers['Content-Type']
                    r = self.pool_manager.request(
                        method, url,
                        fields=post_params,
                        encode_multipart=True,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                # Pass a `string` parameter directly in the body to support
                # other content types than Json when `body` argument is
                # provided in serialized form
                elif isinstance(body, str) or isinstance(body, bytes):
                    request_body = body
                    r = self.pool_manager.request(
                        method, url,
                        body=request_body,
                        preload_content=_preload_content,
                        timeout=timeout,
                        headers=headers)
                else:
                    # Cannot generate the request from given parameters
                    msg = """Cannot prepare a request message for provided
                             arguments. Please check that your arguments match
                             declared content type."""
                    raise ApiException(status=0, reason=msg)
            # For `GET`, `HEAD`
            else:
                r = self.pool_manager.request(method, url,
                                              fields=query_params,
                                              preload_content=_preload_content,
                                              timeout=timeout,
                                              headers=headers)
        except urllib3.exceptions.SSLError as e:
            msg = "{0}\n{1}".format(type(e).__name__, str(e))
            raise ApiException(status=0, reason=msg)

        if _preload_content:
            r = RESTResponse(r)

            # In the python 3, the response.data is bytes.
            # we need to decode it to string.
            if six.PY3:
                r.data = r.data.decode('utf8')

            # log response body
            logger.debug("response body: %s", r.data)

        if not 200 <= r.status <= 299:
>           raise ApiException(http_resp=r)
E           kubernetes.client.exceptions.ApiException: (500)
E           Reason: Internal Server Error
E           HTTP response headers: HTTPHeaderDict({'Audit-Id': '74aa7c35-68f9-4b19-91c2-22c071f8bad8', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': '20fe89dc-62d1-40a8-bde3-b4d3893bb017', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3f197158-a38d-4553-8c16-85addf5fb0ab', 'Date': 'Fri, 24 Apr 2026 20:59:10 GMT', 'Content-Length': '264'})
E           HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"conversion webhook for serving.kserve.io/v1alpha2, Kind=LLMInferenceService failed: Post \"https://llmisvc-webhook-server-service.kserve.svc:443/convert?timeout=30s\": EOF","code":500}

../../python/kserve/.venv/lib64/python3.11/site-packages/kubernetes/client/rest.py:238: ApiException

The above exception was the direct cause of the following exception:

test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', se...
{'name': 'scaling-keda-autoscale-stop-ked-a9b76853'}]},
'status': None}, model_name='facebook/opt-125m')

    @pytest.mark.llminferenceservice
    @pytest.mark.autoscaling
    @pytest.mark.autoscaling_keda
    @pytest.mark.parametrize(
        "test_case",
        [
            pytest.param(
                TestCase(
                    base_refs=[
                        "router-managed",
                        "workload-llmd-simulator-no-replicas",
                        "scaling-keda",
                    ],
                    prompt="KServe is a",
                    service_name="autoscale-stop-keda",
                ),
                marks=[
                    pytest.mark.cluster_cpu,
                    pytest.mark.cluster_single_node,
                    pytest.mark.llmd_simulator,
                ],
            ),
        ],
        indirect=["test_case"],
        ids=generate_test_id,
    )
    @log_execution
    def test_llm_autoscaling_stop_keda(test_case: TestCase):
        """Setting stop annotation should delete VA and ScaledObject."""
        inject_k8s_proxy()
        kserve_client = _new_kserve_client()
        service_name = test_case.llm_service.metadata.name

        try:
>           _create_and_wait(kserve_client, test_case)

llmisvc/test_llm_autoscaling.py:855:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

kserve_client = <kserve.api.kserve_client.KServeClient object at 0x...>
test_case = TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', se...
{'name': 'scaling-keda-autoscale-stop-ked-a9b76853'}]},
'status': None}, model_name='facebook/opt-125m')

    def _create_and_wait(kserve_client, test_case):
        """Create LLMISVC and wait for it to be ready."""
        create_llmisvc(kserve_client, test_case.llm_service)
>       wait_for_llm_isvc_ready(
            kserve_client, test_case.llm_service, test_case.wait_timeout
        )

llmisvc/test_llm_autoscaling.py:295:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (<kserve.api.kserve_client.KServeClient object at 0x...>, {'api_version': 'serving.kserve.io/v1alpha1',
 'kin...-repl-cc817675'},
             {'name': 'scaling-keda-autoscale-stop-ked-a9b76853'}]},
 'status': None}, 900)
kwargs = {}, func_name = 'wait_for_llm_isvc_ready'
timestamp_start = '2026-04-24T20:48:25.751412', start_time = 1777063705.7517989
duration = 644.8421502113342, timestamp_end = '2026-04-24T20:59:10.593949'

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func_name = func.__name__

        timestamp_start = datetime.now().isoformat()
        logger.info(
            f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
        )
        start_time = time.time()

        try:
>           result = func(*args, **kwargs)

llmisvc/logging.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

kserve_client = <kserve.api.kserve_client.KServeClient object at 0x...>
given = {'api_version': 'serving.kserve.io/v1alpha1',
 'kind': 'LLMInferenceService',
 'metadata': {'annotations': None,
 ...tor-no-repl-cc817675'},
             {'name': 'scaling-keda-autoscale-stop-ked-a9b76853'}]},
 'status': None}
timeout_seconds = 900

    @log_execution
    def wait_for_llm_isvc_ready(
        kserve_client: KServeClient,
        given: V1alpha1LLMInferenceService,
        timeout_seconds: int = 900,
    ) -> str:
        def assert_llm_isvc_ready():
            out = get_llmisvc(
                kserve_client,
                given.metadata.name,
                given.metadata.namespace,
                given.api_version.split("/")[1],
            )

            if "status" not in out:
                raise AssertionError("No status found in LLM inference service")

            status = out["status"]
            if "conditions" not in status:
                raise AssertionError("No conditions found in status")

            expected_true_conditions = {"Ready", "WorkloadsReady", "RouterReady"}
            got_true_conditions = set()

            conditions = status["conditions"]

            for condition in conditions:
                if condition.get("status") == "True":
                    got_true_conditions.add(condition.get("type"))

            missing_conditions = expected_true_conditions - got_true_conditions
            if missing_conditions:
                raise AssertionError(
                    f"Missing true conditions: {missing_conditions}, expected {expected_true_conditions}, got {conditions}"
                )
            return True

>       return wait_for(assert_llm_isvc_ready, timeout=timeout_seconds, interval=1.0)

llmisvc/test_llm_inference_service.py:618:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

assertion_fn = <function wait_for_llm_isvc_ready.<locals>.assert_llm_isvc_ready at 0x7f60bdda6ca0>
timeout = 900, interval = 1.0

    def wait_for(
        assertion_fn: Callable[[], Any], timeout: float = 5.0, interval: float = 0.1
    ) -> Any:
        """Wait for the assertion to succeed within timeout."""
        deadline = time.time() + timeout
        while True:
            try:
>               return assertion_fn()

llmisvc/test_llm_inference_service.py:628:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def assert_llm_isvc_ready():
>       out = get_llmisvc(
            kserve_client,
            given.metadata.name,
            given.metadata.namespace,
            given.api_version.split("/")[1],
        )

llmisvc/test_llm_inference_service.py:588:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

args = (<kserve.api.kserve_client.KServeClient object at 0x...>, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1')
kwargs = {}, func_name = 'get_llmisvc'
timestamp_start = '2026-04-24T20:59:10.574536', start_time = 1777064350.5747387
duration = 0.018978595733642578, timestamp_end = '2026-04-24T20:59:10.593721'

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        func_name = func.__name__

        timestamp_start = datetime.now().isoformat()
        logger.info(
            f"[{func_name}] [{timestamp_start}] start - args={args}, kwargs={kwargs}"
        )
        start_time = time.time()

        try:
>           result = func(*args, **kwargs)

llmisvc/logging.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

kserve_client = <kserve.api.kserve_client.KServeClient object at 0x...>
name = 'autoscale-stop-keda', namespace = 'kserve-ci-e2e-test'
version = 'v1alpha1'

    @log_execution
    def get_llmisvc(
        kserve_client: KServeClient,
        name,
        namespace,
        version=constants.KSERVE_V1ALPHA1_VERSION,
    ):
        try:
            return kserve_client.api_instance.get_namespaced_custom_object(
                constants.KSERVE_GROUP,
                version,
                namespace,
                KSERVE_PLURAL_LLMINFERENCESERVICE,
                name,
            )
        except client.rest.ApiException as e:
>           raise RuntimeError(
                f"❌ Exception when calling CustomObjectsApi->"
                f"get_namespaced_custom_object for LLMInferenceService: {e}"
            ) from e
E           RuntimeError: ❌ Exception when calling CustomObjectsApi->get_namespaced_custom_object for LLMInferenceService: (500)
E           Reason: Internal Server Error
E           HTTP response headers: HTTPHeaderDict({'Audit-Id': '74aa7c35-68f9-4b19-91c2-22c071f8bad8', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': '20fe89dc-62d1-40a8-bde3-b4d3893bb017', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3f197158-a38d-4553-8c16-85addf5fb0ab', 'Date': 'Fri, 24 Apr 2026 20:59:10 GMT', 'Content-Length': '264'})
E           HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"conversion webhook for serving.kserve.io/v1alpha2, Kind=LLMInferenceService failed: Post \"https://llmisvc-webhook-server-service.kserve.svc:443/convert?timeout=30s\": EOF","code":500}

llmisvc/test_llm_inference_service.py:484: RuntimeError
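The exception that actually ended the readiness wait is the API server failing the GET because the CRD conversion webhook for LLMInferenceService (the serving.kserve.io/v1alpha2 form named in the error, while the client requested v1alpha1) dropped the connection mid-request. A minimal diagnostic sketch, assuming cluster access and the kubernetes Python client; the CRD and Service names below are taken from the error message, everything else is illustrative:

    # Diagnostic sketch (not part of the test suite): inspect the conversion
    # config on the LLMInferenceService CRD and check whether the webhook
    # Service named in the 500 above has any ready endpoints.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() in-cluster

    ext = client.ApiextensionsV1Api()
    crd = ext.read_custom_resource_definition("llminferenceservices.serving.kserve.io")
    conv = crd.spec.conversion
    print("conversion strategy:", conv.strategy)  # "Webhook", per the error above
    if conv.webhook and conv.webhook.client_config and conv.webhook.client_config.service:
        svc = conv.webhook.client_config.service
        print(f"webhook backend: {svc.namespace}/{svc.name}, port {svc.port}, path {svc.path}")

    # An EOF on /convert usually means the webhook pod restarted or is not
    # listening; a Service with no ready endpoints would confirm that.
    core = client.CoreV1Api()
    eps = core.read_namespaced_endpoints("llmisvc-webhook-server-service", "kserve")
    print("ready endpoints:", eps.subsets or "none")

Because reading the object at a non-storage version goes through /convert, a dead webhook would turn every GET of this resource into a 500, which would explain why the polling loop aborted with an exception rather than running out its timeout.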
------------------------------ Captured log setup ------------------------------
INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig router-managed-autoscale-stop-k-e34ebc49 in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig router-managed-autoscale-stop-k-e34ebc49
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig router-managed-autoscale-stop-k-e34ebc49
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig workload-llmd-simulator-no-repl-cc817675 in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig workload-llmd-simulator-no-repl-cc817675
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig workload-llmd-simulator-no-repl-cc817675
INFO e2e.llmisvc.logging:fixtures.py:1277 Checking LLMInferenceServiceConfig scaling-keda-autoscale-stop-ked-a9b76853 in namespace kserve-ci-e2e-test
INFO e2e.llmisvc.logging:fixtures.py:1303 Resource not found, creating LLMInferenceServiceConfig scaling-keda-autoscale-stop-ked-a9b76853
INFO e2e.llmisvc.logging:fixtures.py:1313 ✓ Successfully created LLMInferenceServiceConfig scaling-keda-autoscale-stop-ked-a9b76853
------------------------------ Captured log call -------------------------------
INFO e2e.llmisvc.logging:logging.py:34 [test_llm_autoscaling_stop_keda] [2026-04-24T20:48:25.348390] start - args=(), kwargs={'test_case': TestCase(base_refs=['router-managed', 'workload-llmd-simulator-no-replicas', 'scaling-keda'], prompt='KServe is a', service_name='autoscale-stop-keda', endpoint='/v1/completions', max_tokens=100, payload_formatter=None, response_assertion=<...>, wait_timeout=900, response_timeout=60, before_test=[], after_test=[], llm_service={'api_version': 'serving.kserve.io/v1alpha1',
 'kind': 'LLMInferenceService',
 'metadata': {'annotations': None,
              'creation_timestamp': None,
              'deletion_grace_period_seconds': None,
              'deletion_timestamp': None,
              'finalizers': None,
              'generate_name': None,
              'generation': None,
              'labels': None,
              'managed_fields': None,
              'name': 'autoscale-stop-keda',
              'namespace': 'kserve-ci-e2e-test',
              'owner_references': None,
              'resource_version': None,
              'self_link': None,
              'uid': None},
 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-stop-k-e34ebc49'},
                       {'name': 'workload-llmd-simulator-no-repl-cc817675'},
                       {'name': 'scaling-keda-autoscale-stop-ked-a9b76853'}]},
 'status': None}, model_name='facebook/opt-125m')}
INFO e2e.llmisvc.logging:fixtures.py:1328 No HTTP proxy configured for k8s client
INFO e2e.llmisvc.logging:logging.py:34 [create_llmisvc] [2026-04-24T20:48:25.361351] start - args=(<kserve.api.kserve_client.KServeClient object at 0x...>, {... same LLMInferenceService dict as above ...}), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [create_llmisvc] [2026-04-24T20:48:25.751129] end - ✅ in 0.390s
INFO e2e.llmisvc.logging:logging.py:34 [wait_for_llm_isvc_ready] [2026-04-24T20:48:25.751412] start - args=(<kserve.api.kserve_client.KServeClient object at 0x...>, {... same LLMInferenceService dict as above ...}, 900), kwargs={}
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:25.751804] start - args=(<kserve.api.kserve_client.KServeClient object at 0x...>, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:25.846616] end - ✅ in 0.095s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: No conditions found in status
[... the same get_llmisvc start/end pair and "Waiting: No conditions found in status" message repeated once per second from 20:48:26 through 20:48:39 ...]
INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:40.467616] start - args=(<kserve.api.kserve_client.KServeClient object at 0x...>, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:40.474256] end - ✅ in 0.006s
INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
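The reason the service never went Ready is separate from the webhook crash: every reconcile fails with ScalingCRDNotFound because nothing on the cluster serves kind "VariantAutoscaling" in llmd.ai/v1alpha1, so the KEDA scaling path can never complete. A quick precondition check, again assuming the kubernetes Python client; the plural "variantautoscalings" is an assumption derived from the kind, since only the group, version, and kind appear in the log:

    # Sketch: confirm whether the VariantAutoscaling CRD exists and serves
    # llmd.ai/v1alpha1. "variantautoscalings" is an assumed plural; adjust if
    # the operator that ships this CRD names it differently.
    from kubernetes import client, config

    config.load_kube_config()
    ext = client.ApiextensionsV1Api()
    try:
        crd = ext.read_custom_resource_definition("variantautoscalings.llmd.ai")
    except client.exceptions.ApiException as e:
        if e.status == 404:
            print("VariantAutoscaling CRD is not installed")
        else:
            raise
    else:
        served = [v.name for v in crd.spec.versions if v.served]
        print("served versions:", served)  # the controller needs "v1alpha1" here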
[... the identical "Missing true conditions" message, with the same ScalingCRDNotFound conditions, was logged after each ~1 s get_llmisvc poll from 20:48:40 until 2026-04-24T20:59:10, when the get_llmisvc call itself failed with the conversion-webhook 500 shown in the traceback above ...]
[get_llmisvc] [2026-04-24T20:48:52.593345] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:52.600823] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:53.601198] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:53.608136] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:54.608397] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:54.617372] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:55.617753] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:55.625174] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:56.625474] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:56.632861] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:48:57.633092] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:48:57.640053] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
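What the loop is doing: the wait helper re-reads the LLMInferenceService status and blocks until every expected condition type reports status 'True'. A minimal sketch of that pattern, assuming a get_llmisvc-style getter that returns the resource as a dict; the helper signature, timeout, and interval below are illustrative, not taken from the test suite:

import time

def wait_for_true_conditions(get_llmisvc, name, namespace, expected,
                             timeout_s=600, interval_s=1.0):
    """Poll the resource until every condition type in `expected` is 'True'
    or the deadline passes. `get_llmisvc` is an assumed getter returning a dict."""
    deadline = time.monotonic() + timeout_s
    missing = set(expected)
    while time.monotonic() < deadline:
        status = get_llmisvc(name, namespace).get("status", {})
        # Collect the condition types that currently report status 'True'.
        true_types = {c["type"] for c in status.get("conditions", [])
                      if c.get("status") == "True"}
        missing = set(expected) - true_types
        if not missing:
            return status
        time.sleep(interval_s)
    raise TimeoutError(f"conditions never became True: {sorted(missing)}")

In this run the poll is hopeless by construction: MainWorkloadReady, Ready, and WorkloadsReady are pinned False and RouterReady stays Unknown for as long as the controller cannot resolve the VariantAutoscaling kind, so a loop like this can only run out its deadline.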
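The reason ScalingCRDNotFound points at the root cause: the llmisvc controller tries to reconcile a VariantAutoscaling object for autoscale-stop-keda, but no CRD serving kind "VariantAutoscaling" in group/version llmd.ai/v1alpha1 is registered with the API server. A pre-flight check along these lines would fail fast instead of burning the whole wait budget; this is a sketch using the official kubernetes Python client, and the plural CRD name variantautoscalings.llmd.ai is an assumption inferred from the kind and group in the error message, not confirmed by the log:

from kubernetes import client, config
from kubernetes.client.exceptions import ApiException

# Assumed plural name for kind VariantAutoscaling in group llmd.ai (inferred, unverified).
VA_CRD_NAME = "variantautoscalings.llmd.ai"

def variant_autoscaling_crd_installed() -> bool:
    """Return True if the VariantAutoscaling CRD is registered with the API server."""
    config.load_kube_config()  # swap for config.load_incluster_config() inside a pod
    api = client.ApiextensionsV1Api()
    try:
        api.read_custom_resource_definition(VA_CRD_NAME)
        return True
    except ApiException as exc:
        if exc.status == 404:  # CRD absent: matches the controller's ScalingCRDNotFound
            return False
        raise

if __name__ == "__main__":
    print(f"{VA_CRD_NAME} installed: {variant_autoscaling_crd_installed()}")

If this returns False, the fix is environmental rather than in the test: whatever component ships the CRD (presumably the llm-d autoscaling component, given the llmd.ai group) has to be deployed before the KEDA autoscale tests run; retrying the wait cannot succeed.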
[... identical poll iterations continue unchanged through 2026-04-24T20:49:22; the raw log resumes at the final poll below ...] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:23.836685] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:23.844903] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:24.845140] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:24.852099] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:25.852367] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:25.859750] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:26.860044] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:26.866947] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:27.867230] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:27.874355] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:28.874793] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:28.883351] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:29.883782] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:29.891080] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:30.891346] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:30.898489] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:49:31.898855] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:31.906805] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:32.907240] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:32.914692] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:33.915118] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:33.922004] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:34.922349] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:34.931180] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:35.931580] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:35.939443] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:36.939729] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:36.947892] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:37.948290] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:37.955356] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:38.955632] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:38.962917] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:39.963210] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:39.979703] end - ✅ in 0.016s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:40.979960] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:40.986802] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:41.987085] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:41.994190] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:42.994538] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:43.001577] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:44.001860] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:44.008773] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:49:45.009126] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:45.016750] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:46.017032] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:46.026282] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:47.026749] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:47.034983] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:48.035271] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:48.043378] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:49.043647] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:49.050785] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:50.051107] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:50.058544] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:51.058774] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:51.066234] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:52.066600] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:52.073545] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:49:53.073777] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:49:53.080591] end - ✅ in 0.007s
[e2e-llm-inference-service] (the [get_llmisvc] start/end pair and the identical "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" status dump above repeat once per second from 2026-04-24T20:49:54 through 2026-04-24T20:50:28 with no change in any condition: MainWorkloadReady, Ready, and WorkloadsReady stay False with reason ScalingCRDNotFound, RouterReady stays Unknown, and PresetsCombined stays True)
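The loop never converges because every reconcile hits the same discovery error: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1". In other words, the llm-d VariantAutoscaling CRD was never registered on the test cluster, so the autoscale-stop-keda case can only poll until its timeout. A quick way to confirm that against the cluster is sketched below; it is a minimal diagnostic, assuming the conventional lower-case plural CRD name variantautoscalings.llmd.ai (the log only confirms the kind and the llmd.ai/v1alpha1 group/version):

    # Hypothetical check; the plural name "variantautoscalings" is assumed, not taken from the log.
    # List every resource the API server advertises under the llmd.ai group.
    kubectl api-resources --api-group=llmd.ai
    # Fetch the CRD object itself; a NotFound error here means it was never installed.
    kubectl get crd variantautoscalings.llmd.ai
    # If it exists, show which versions it serves (the controller expects v1alpha1).
    kubectl get crd variantautoscalings.llmd.ai -o jsonpath='{.spec.versions[*].name}'

If those come back empty, the gap is most likely in the deploy step (the llm-d CRDs were not applied before the KEDA autoscale tests ran) rather than in the controller logic.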
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:29.357705] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:29.364736] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:30.365056] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:30.372023] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:31.372350] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:31.389762] end - ✅ in 0.017s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:32.390045] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:32.397069] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:33.397395] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:33.404983] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:34.405358] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:34.412207] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:35.412554] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:35.421061] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:36.421654] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:36.428760] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
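The loop never converges because the controller cannot find the llm-d autoscaling CRD: every poll reports reason ScalingCRDNotFound ('no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"'), which holds MainWorkloadReady, Ready, and WorkloadsReady at False and RouterReady at Unknown. A minimal pre-flight check for this is sketched below, under the assumption that the CRD's registered name is variantautoscalings.llmd.ai (the plural form is not confirmed by this log):

  # Fail fast if the VariantAutoscaling CRD (llmd.ai/v1alpha1) is not installed.
  # NOTE: variantautoscalings.llmd.ai is an assumed plural name, not taken from this log.
  if ! kubectl get crd variantautoscalings.llmd.ai >/dev/null 2>&1; then
    echo "VariantAutoscaling CRD not found; install the llm-d autoscaler CRDs before running the KEDA autoscale tests" >&2
    exit 1
  fi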
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:50:59.603359] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:50:59.646736] end - ✅ in 0.043s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling:
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:00.647055] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:00.653940] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:01.654235] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:01.661546] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:02.661956] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:02.669026] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:51:03.669263] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:03.676102] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:04.676371] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:04.683163] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:05.683532] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:05.689992] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:06.690260] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:06.696768] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:07.697060] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:07.703378] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:08.703664] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:08.710776] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:09.711106] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:09.746794] end - ✅ in 0.035s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:10.747109] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:10.754204] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:11.754539] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:11.761743] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:12.762100] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:12.768930] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:13.769192] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:13.776595] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:14.776980] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:14.783894] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:15.784162] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:15.791419] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:51:16.791756] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:16.798680] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:17.798960] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:17.806104] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:18.806507] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:18.813672] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:19.813946] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:19.820788] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:20.821081] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:20.828197] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:21.828562] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:21.836991] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:22.837320] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:22.844784] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:23.845068] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:23.851877] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:24.852167] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:24.859395] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:25.859731] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:25.866478] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:26.866795] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:26.873433] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:27.873755] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:27.880558] end - ✅ in 0.006s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:28.880881] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:28.887412] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
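A quick way to confirm the missing CRD from a shell with the test cluster's KUBECONFIG exported; this is a diagnostic sketch, not part of the test run, and the plural CRD name variantautoscalings.llmd.ai and the llminferenceservice resource name are assumptions inferred from the logged kind and API group, not taken from this log:

  # Does the cluster expose any resource in the llmd.ai API group?
  kubectl api-resources --api-group=llmd.ai

  # Check for the CRD itself (name assumed to be <plural>.<group>)
  kubectl get crd variantautoscalings.llmd.ai

  # Re-read the stuck conditions straight from the resource instead of
  # tailing the test log
  kubectl get llminferenceservice autoscale-stop-keda -n kserve-ci-e2e-test \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status} {.reason}{"\n"}{end}'

If the first two commands return nothing / NotFound, the likely fix is to install the CRD that provides VariantAutoscaling (llmd.ai/v1alpha1), or to disable the KEDA-backed scaling path, before the e2e run; nothing inside the test itself can make the conditions flip. The remaining iterations of the wait loop are preserved verbatim: [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc]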
[2026-04-24T20:51:29.887760] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:29.947174] end - ✅ in 0.059s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:30.947532] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:30.955142] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:31.955488] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:31.962278] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:32.962705] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:32.969927] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:33.970331] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:33.976962] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:34.977284] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:34.984677] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:35.984925] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:35.991918] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:36.992264] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:36.998807] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:37.999075] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:38.007705] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:39.008027] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:39.020986] end - ✅ in 0.013s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:40.021315] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:40.030869] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:41.031120] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:41.037690] end - ✅ in 0.006s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:42.038021] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:42.045575] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:51:43.045844] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:43.052346] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:44.052646] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:44.060070] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:45.060331] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:45.067213] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:46.067559] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:46.074685] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:47.074943] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:47.081368] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:48.081638] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:48.088518] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:49.088794] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:49.106209] end - ✅ in 0.017s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:50.106498] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:50.113415] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:51.113747] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:51.120536] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:51:52.120800] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:51:52.128250] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
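The repeated reason ScalingCRDNotFound is the whole story here: the API server has no resource registered for kind "VariantAutoscaling" in llmd.ai/v1alpha1, so the llmisvc controller can never create the autoscale-stop-keda-kserve-va object it wants for KEDA-backed scaling, and the required conditions can never turn True until that CRD (presumably installed as part of the llm-d components this job deploys) is present on the cluster. One quick way to confirm the gap is to list CRDs in that API group; the sketch below is illustrative only, assuming the official kubernetes Python client and a reachable kubeconfig (kubectl api-resources --api-group=llmd.ai would show the same thing), and is not part of the KServe test suite:

    # Illustrative check, not from the test suite: list CRDs registered under
    # the llmd.ai group; the group name comes from the error message above.
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside the CI pod
    ext = client.ApiextensionsV1Api()
    llmd_crds = [
        crd.metadata.name
        for crd in ext.list_custom_resource_definition().items
        if crd.spec.group == "llmd.ai"
    ]
    # An empty list here matches the 'no matches for kind' error and explains
    # why every poll in this log reports ScalingCRDNotFound.
    print(llmd_crds or "no llmd.ai CRDs installed")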
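As for the loop producing these lines, test_llm_inference_service.py:632 evidently fetches the LLMInferenceService once per second and diffs the condition types whose status is 'True' against the required set, logging the difference until it empties or the wait times out. A minimal sketch of that polling logic follows, assuming the kubernetes Python client and that the resource lives in the serving.kserve.io group with plural llminferenceservices; the suite's actual get_llmisvc helper and wait parameters may differ:

    # Minimal sketch of the condition-wait loop, not the actual KServe helper.
    import time
    from kubernetes import client, config

    REQUIRED = {"Ready", "WorkloadsReady", "RouterReady"}

    def wait_for_llmisvc(name, namespace, version="v1alpha1", timeout_s=600):
        config.load_kube_config()
        api = client.CustomObjectsApi()
        deadline = time.time() + timeout_s
        missing = set(REQUIRED)
        while time.time() < deadline:
            obj = api.get_namespaced_custom_object(
                "serving.kserve.io", version, namespace,  # group/plural assumed
                "llminferenceservices", name,
            )
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = REQUIRED - true_types
            if not missing:
                return obj  # all required conditions are True
            print(f"Waiting: Missing true conditions: {missing}")
            time.sleep(1)  # matches the ~1s cadence visible in the log
        raise TimeoutError(f"{name}: still missing {missing} after {timeout_s}s")

With the VariantAutoscaling CRD absent, missing never shrinks, which is exactly the unbroken stream of Waiting lines in this log until the suite gives up and the step fails.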
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:05.215677] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:05.222638] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling:
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:06.222884] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:06.229459] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:07.229821] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:07.236992] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:08.237277] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:08.246610] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
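For context, the wait loop producing these entries is the standard Kubernetes condition-polling pattern: fetch the custom resource, collect the condition types whose status is True, and keep waiting while any expected type is missing. Below is a minimal sketch in Python, assuming the kubernetes client package; the API group "serving.kserve.io" and plural "llminferenceservices" are inferred for illustration, not taken from the kserve test code.

# Minimal sketch of the wait loop seen above (not the actual kserve test
# code). The group/plural values are assumptions inferred from the log.
import time
from kubernetes import client, config

EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

def wait_for_llmisvc_ready(name: str, namespace: str,
                           timeout: float = 600.0, interval: float = 1.0):
    config.load_kube_config()
    api = client.CustomObjectsApi()
    deadline = time.monotonic() + timeout
    missing = set(EXPECTED)
    while time.monotonic() < deadline:
        obj = api.get_namespaced_custom_object(
            group="serving.kserve.io", version="v1alpha1",
            namespace=namespace, plural="llminferenceservices", name=name)
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = EXPECTED - true_types
        if not missing:
            return obj  # all expected condition types report True
        # Logging the reasons (not just the missing type names) would have
        # surfaced ScalingCRDNotFound on the very first iteration.
        print(f"Waiting: Missing true conditions: {missing}")
        time.sleep(interval)
    raise TimeoutError(f"conditions never became True: {missing}")

One improvement over the behavior visible in this log: report the failing conditions' reason fields as well, so a terminal error such as ScalingCRDNotFound is obvious immediately instead of being buried in minutes of identical "Waiting" lines.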
[... the same one-second poll/status cycle repeats unchanged from 20:52:09 through 20:52:23 ...]
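The ScalingCRDNotFound reason points at the actual failure: the cluster has no VariantAutoscaling CRD in API group llmd.ai at version v1alpha1, so the llmisvc controller can never create or read the autoscale-stop-keda-kserve-va object, and WorkloadsReady (and therefore Ready) stays False no matter how long the test waits. A preflight check for the CRD would fail fast instead of burning the full timeout. Below is a minimal sketch, again assuming the kubernetes Python client; the CRD name "variantautoscalings.llmd.ai" assumes the plural form and is not confirmed by the log.

# Minimal sketch: check whether the llmd.ai VariantAutoscaling CRD exists.
# "variantautoscalings.llmd.ai" assumes the plural "variantautoscalings";
# adjust if the actual CRD uses a different plural.
from kubernetes import client, config

def variant_autoscaling_crd_installed() -> bool:
    config.load_kube_config()
    ext = client.ApiextensionsV1Api()
    try:
        ext.read_custom_resource_definition("variantautoscalings.llmd.ai")
        return True
    except client.exceptions.ApiException as exc:
        if exc.status == 404:
            return False  # CRD missing: matches ScalingCRDNotFound above
        raise

Skipping or failing the KEDA autoscale tests up front when this check returns False would turn this multi-minute silent wait into an immediate, actionable error.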
[... and the cycle continues, still unchanged, through 20:52:34 ...] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc]
[2026-04-24T20:52:35.440086] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:35.446752] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:36.447034] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:36.454165] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:37.454502] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:37.463484] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:38.463815] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:38.470208] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:39.470509] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:39.477920] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:40.478231] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:40.485015] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:41.485389] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:41.492424] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:42.492728] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:42.499620] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:43.499985] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:43.506597] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:44.506864] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:44.513800] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:45.514093] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:45.524263] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:46.524584] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:46.531623] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:47.531925] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:47.538872] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:52:48.539173] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:48.545982] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:49.546272] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:49.553675] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:50.554046] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:50.561069] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:51.561431] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:51.567863] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:52.568134] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:52.575155] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:53.575500] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:53.582177] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:54.582534] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:54.589853] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:55.590117] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:55.597121] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:56.597371] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:56.603928] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:57.604215] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:57.611549] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:58.611838] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:58.618540] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:52:59.618797] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:52:59.625471] end - ✅ in 0.006s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:00.625745] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:00.633189] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:53:01.633507] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:01.640409] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:02.640701] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:02.647838] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
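The loop producing these lines is a plain poll-until-true wait on the resource's status conditions. A minimal sketch of that pattern, assuming a get_llmisvc(name, namespace, version) helper that returns the LLMInferenceService as a dict; the actual code in test_llm_inference_service.py may differ:

    # Sketch of the wait loop behind the "Waiting: Missing true conditions" lines.
    # get_llmisvc is an assumed helper returning the resource as a dict.
    import time

    EXPECTED = {"Ready", "WorkloadsReady", "RouterReady"}

    def wait_for_llmisvc_ready(get_llmisvc, name, namespace,
                               timeout_s=600, interval_s=1.0):
        deadline = time.monotonic() + timeout_s
        missing = set(EXPECTED)
        while time.monotonic() < deadline:
            resource = get_llmisvc(name, namespace, "v1alpha1")
            conditions = resource.get("status", {}).get("conditions", [])
            # A condition counts only if its status string is exactly "True".
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = EXPECTED - true_types
            if not missing:
                return resource  # every expected condition is True
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {EXPECTED}, got {conditions}")
            time.sleep(interval_s)
        raise TimeoutError(f"conditions never became True: {missing}")

Because the controller republishes the same conditions on every reconcile, a loop like this can only time out once the underlying ScalingCRDNotFound error is fixed out-of-band.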
[... summary of the unchanged conditions list in every elided cycle: MainWorkloadReady=False, Ready=False and WorkloadsReady=False (all with reason ScalingCRDNotFound), RouterReady=Unknown, PresetsCombined=True; the shared message is 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', i.e. the API server has no VariantAutoscaling kind registered in group llmd.ai ...]
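The repeated reason ScalingCRDNotFound and the 'no matches for kind "VariantAutoscaling"' message point at a missing CRD rather than a workload problem. A hedged diagnostic sketch using the standard kubernetes Python client, assuming the conventional plural CRD name variantautoscalings.llmd.ai (verify against the llm-d manifests if it differs):

    # Check whether the VariantAutoscaling CRD is installed on the cluster.
    # "variantautoscalings.llmd.ai" is the assumed plural CRD name.
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    api = client.ApiextensionsV1Api()
    try:
        crd = api.read_custom_resource_definition("variantautoscalings.llmd.ai")
        print(f"CRD present: {crd.metadata.name}, "
              f"served versions: {[v.name for v in crd.spec.versions]}")
    except ApiException as exc:
        if exc.status == 404:
            # Matches the controller's ScalingCRDNotFound condition reason.
            print("CRD variantautoscalings.llmd.ai is NOT installed")
        else:
            raise

Equivalently from a shell, kubectl api-resources --api-group=llmd.ai (or kubectl get crd variantautoscalings.llmd.ai) shows whether the kind is registered.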
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:05.664148] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:05.670913] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:06.671174] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:06.679902] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:07.680209] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:07.691130] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:08.691353] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:08.700976] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:09.701248] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:09.708180] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:10.708484] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:10.715717] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:11.716031] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:11.723218] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:12.723570] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:12.730546] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:13.730823] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:13.738066] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:53:14.738526] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:14.745230] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:15.745527] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:15.752637] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:16.752969] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:16.759896] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:17.760287] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:17.767573] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:18.767936] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:18.775262] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:19.775572] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:19.782940] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:20.783200] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:20.790536] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:21.790801] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:21.797732] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:22.798017] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:22.804934] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:23.805243] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:23.811770] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:24.812007] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:24.818821] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:25.819127] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:25.825936] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:26.826206] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:26.833546] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:53:27.833860] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:27.840603] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:28.840945] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:28.848066] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:29.848525] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:29.856148] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:30.856484] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:30.863637] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:31.863924] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:31.875154] end - ✅ in 0.011s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:32.875507] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:32.882874] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:33.883127] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:33.890088] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:53:34.890370] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:53:34.898027] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... repeated polling elided: the [get_llmisvc] start/end pair and the test_llm_inference_service.py:632 "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" record above repeat once per second from 2026-04-24T20:53:35 through 2026-04-24T20:54:11; every iteration returns the identical condition list (lastTransitionTime fixed at 2026-04-24T20:48:39Z), with Ready, WorkloadsReady and MainWorkloadReady 'False' for reason 'ScalingCRDNotFound', RouterReady 'Unknown', and PresetsCombined the only 'True' condition; only the poll timestamps and durations (0.006s to 0.011s) differ ...]
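The "Waiting: Missing true conditions" records elided above come from a plain condition-polling loop: fetch the LLMInferenceService, collect the condition types whose status is 'True', and keep waiting while any expected type is missing. A minimal sketch of that pattern follows, with names and signatures assumed for illustration only (the real helper at test_llm_inference_service.py:632 is not shown in this log):

```python
# Minimal sketch of the condition-wait pattern implied by the records above.
# `get_status` stands in for the real get_llmisvc call; all names are illustrative.
import time


def wait_for_conditions(get_status, expected, timeout_s=600, interval_s=1.0):
    """Poll until every condition type in `expected` reports status == 'True'."""
    missing = set(expected)
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        conditions = get_status()  # list of dicts shaped like the ones logged above
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = set(expected) - true_types
        if not missing:
            return conditions  # every expected condition is True
        print(f"Waiting: Missing true conditions: {missing}, expected {set(expected)}")
        time.sleep(interval_s)
    raise TimeoutError(f"conditions still not True after {timeout_s}s: {missing}")
```

Because Ready, WorkloadsReady and MainWorkloadReady are pinned 'False' by ScalingCRDNotFound, a loop like this can never succeed here regardless of timeout: the condition list is static, so the wait burns its full budget and the case fails.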
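The reconciler message itself, no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1", means the API server has no CRD serving that group/version/kind, i.e. the VariantAutoscaling CRD was never installed on this cluster before the autoscale-stop-keda case ran. A quick hypothetical check with the Kubernetes Python client is sketched below; the plural name variantautoscalings is guessed from the kind and should be verified against the actual CRD manifest:

```python
# Hypothetical diagnostic, not part of the test suite: check whether the CRD the
# llmisvc controller needs actually exists on the cluster.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
api = client.ApiextensionsV1Api()

# <plural>.<group>; the plural form is an assumption inferred from the kind
crd_name = "variantautoscalings.llmd.ai"
try:
    crd = api.read_custom_resource_definition(crd_name)
    served = [v.name for v in crd.spec.versions if v.served]
    print(f"{crd_name} installed, served versions: {served}")
except ApiException as e:
    if e.status == 404:
        print(f"{crd_name} NOT installed: consistent with reason=ScalingCRDNotFound")
    else:
        raise
```

If this reports 404, the fix belongs in cluster setup (apply the VariantAutoscaling CRD, or whichever llm-d component ships it, before the KEDA autoscale tests run), not in a longer wait timeout. The last poll of the elided stretch resumes below.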
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:09.154587] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:09.162017] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:10.162259] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:10.170036] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:11.170430] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:11.177721] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:12.177980] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:12.184981] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:13.185260] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:13.192381] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:14.192723] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:14.200354] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:15.200626] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:15.208890] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:16.209156] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:16.217469] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:17.217847] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:17.225536] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:18.225823] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:18.232225] end - ✅ in 0.006s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:19.232486] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:19.239955] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:54:20.240291] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:20.247548] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:21.247834] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:21.256245] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:22.256542] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:22.275693] end - ✅ in 0.019s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:23.275990] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:23.282754] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:24.283043] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:24.289723] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:25.290016] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:25.297110] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:26.297402] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:26.303838] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:27.304113] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:27.310788] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:28.311134] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:28.317927] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:29.318174] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:29.325566] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:30.325830] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:30.332214] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:31.332508] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:31.339131] end - ✅ in 0.006s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:32.339461] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:32.345918] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:54:33.346175] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:33.353099] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:34.353350] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:34.359768] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:35.360029] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:35.367790] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:54:36.368051] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:54:36.375343] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] [... identical get_llmisvc polls and "Waiting: Missing true conditions" messages repeat once per second from 20:54:37 through 20:55:14; each get_llmisvc call returns in 0.006-0.011s, and every iteration reports the same five conditions, unchanged since 2026-04-24T20:48:39Z: MainWorkloadReady=False (ScalingCRDNotFound), PresetsCombined=True, Ready=False (ScalingCRDNotFound), RouterReady=Unknown, WorkloadsReady=False (ScalingCRDNotFound) ...]
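Note: the poll never progresses because the controller cannot resolve kind "VariantAutoscaling" in group/version llmd.ai/v1alpha1, i.e. the llm-d autoscaling CRD is not installed on the ephemeral test cluster, so the autoscale-stop-keda LLMInferenceService can never reach Ready. Below is a minimal pre-flight sketch, not part of the test suite: it assumes the kubernetes Python client and an active kubeconfig, and the CRD name variantautoscalings.llmd.ai is inferred from the kind/group in the error message rather than confirmed against the llm-d manifests.

# Hypothetical pre-flight check: fail fast if the VariantAutoscaling CRD the
# llmisvc controller reconciles against is missing, instead of polling the
# LLMInferenceService conditions until the test times out.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

CRD_NAME = "variantautoscalings.llmd.ai"  # assumed plural for kind VariantAutoscaling
EXPECTED_VERSION = "v1alpha1"

def variant_autoscaling_crd_ready() -> bool:
    config.load_kube_config()
    api = client.ApiextensionsV1Api()
    try:
        crd = api.read_custom_resource_definition(CRD_NAME)
    except ApiException as exc:
        if exc.status == 404:
            # Same failure mode as the reconciler: no matches for kind
            # "VariantAutoscaling" in version "llmd.ai/v1alpha1".
            return False
        raise
    # The CRD must actually serve the version the controller requests.
    served = {v.name for v in crd.spec.versions if v.served}
    return EXPECTED_VERSION in served

if __name__ == "__main__":
    if not variant_autoscaling_crd_ready():
        raise SystemExit(
            f"{CRD_NAME} missing or not serving {EXPECTED_VERSION}; "
            "install the llm-d autoscaling CRDs before running the KEDA tests"
        )

During triage, an equivalent manual check is "kubectl get crd variantautoscalings.llmd.ai" (a NotFound error confirms the missing CRD) or "kubectl api-resources --api-group=llmd.ai".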
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:15.675228] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:15.682523] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:16.682824] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:16.689841] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:17.690102] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:17.697248] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
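The status dump is self-diagnosing: MainWorkloadReady, Ready, and WorkloadsReady are stuck False with reason ScalingCRDNotFound because the API server has no mapping for kind "VariantAutoscaling" in "llmd.ai/v1alpha1", i.e. the VariantAutoscaling CRD the autoscale-stop-keda case depends on was never installed on this test cluster, and RouterReady never leaves Unknown. A quick check from the harness can confirm whether the CRD is registered. Below is a minimal sketch (an editor's example, not part of the test suite) using the official kubernetes Python client; the CRD name "variantautoscalings.llmd.ai" is an assumption inferred from the kind and group in the error message, not read from the cluster.

    # Diagnostic sketch: is the VariantAutoscaling CRD registered on the cluster?
    # ASSUMPTION: the plural name is "variantautoscalings" (inferred from the
    # kind "VariantAutoscaling" and group "llmd.ai" in the log, unconfirmed).
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    CRD_NAME = "variantautoscalings.llmd.ai"  # <plural>.<group>, assumed plural

    def variant_autoscaling_crd_installed() -> bool:
        config.load_kube_config()  # in-cluster: config.load_incluster_config()
        api = client.ApiextensionsV1Api()
        try:
            api.read_custom_resource_definition(CRD_NAME)
            return True
        except ApiException as exc:
            if exc.status == 404:  # not registered -> "no matches for kind ..."
                return False
            raise

    print("VariantAutoscaling CRD installed:", variant_autoscaling_crd_installed())

A 404 here corresponds exactly to the controller's "no matches for kind" error: the discovery/REST mapper cannot resolve the kind because no CRD serves that group/version.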
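For context on the repeating lines themselves: test_llm_inference_service.py:632 is a 1 Hz readiness poll that re-fetches the LLMInferenceService and re-checks which expected condition types are True. The sketch below shows the pattern under stated assumptions; it is not the test's actual helper, and the group "serving.kserve.io" and plural "llminferenceservices" are assumptions rather than values taken from this log.

    # Sketch of the 1 Hz condition poll visible in this log: fetch the
    # LLMInferenceService as a custom object and wait until every expected
    # condition type reports status "True".
    # ASSUMPTIONS: group "serving.kserve.io", plural "llminferenceservices".
    import time
    from kubernetes import client, config

    def wait_for_llmisvc(name: str, namespace: str,
                         expected=frozenset({"Ready", "WorkloadsReady", "RouterReady"}),
                         timeout_s: float = 600.0) -> None:
        config.load_kube_config()
        api = client.CustomObjectsApi()
        deadline = time.monotonic() + timeout_s
        missing = set(expected)
        while time.monotonic() < deadline:
            obj = api.get_namespaced_custom_object(
                group="serving.kserve.io",      # assumption
                version="v1alpha1",
                namespace=namespace,
                plural="llminferenceservices",  # assumption
                name=name,
            )
            conditions = obj.get("status", {}).get("conditions", [])
            true_types = {c["type"] for c in conditions if c.get("status") == "True"}
            missing = set(expected) - true_types
            if not missing:
                return
            print(f"Waiting: Missing true conditions: {missing}, "
                  f"expected {set(expected)}, got {conditions}")
            time.sleep(1)
        raise TimeoutError(f"{namespace}/{name}: conditions never became True: {missing}")

    wait_for_llmisvc("autoscale-stop-keda", "kserve-ci-e2e-test")

On this cluster the loop can never converge: the conditions are pinned False by the missing CRD, so the poll simply runs until the harness gives up, which is why the same block recurs for minutes in the raw log.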
[e2e-llm-inference-service] [... identical 1 Hz poll cycles elided through 2026-04-24T20:55:46 ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:47.929736] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:47.936651] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling:
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:48.936954] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:48.944516] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:49.944869] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:49.952095] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:50.952480] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:50.960070] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:55:51.960504] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:51.967933] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:52.968336] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:52.975650] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:53.975916] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:53.984632] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:54.984990] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:54.994891] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:55.995249] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:56.002455] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:57.002753] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:57.012884] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:58.013147] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:58.021547] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:55:59.021874] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:55:59.029331] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:00.029685] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:00.037911] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:01.038383] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:01.046039] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:02.046597] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:02.053809] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:03.054155] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:03.062209] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:04.062780] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:04.070780] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:56:05.071148] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:05.078351] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:06.078626] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:06.085882] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:07.086353] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:07.093681] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:08.093982] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:08.102425] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:09.102790] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:09.109801] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:10.110075] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:10.117418] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:11.117767] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:11.124711] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:12.124995] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:12.131890] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[... the polling loop repeats verbatim once per second from 2026-04-24T20:56:13 through 2026-04-24T20:56:48: each iteration is a get_llmisvc start/end pair (returning in ~0.007s) followed by the same "Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}" entry carrying the identical ScalingCRDNotFound conditions shown above; only the timestamps advance ...]
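Every iteration reports the same root cause: the controller cannot resolve kind "VariantAutoscaling" in API group/version "llmd.ai/v1alpha1", i.e. the VariantAutoscaling CRD is not installed on the test cluster, so MainWorkloadReady, Ready, and WorkloadsReady can never turn True no matter how long the loop polls. One way to confirm the missing CRD against the cluster might be the sketch below (the plural CRD name variantautoscalings.llmd.ai is an assumption based on the usual Kubernetes naming convention for the kind, not something this log confirms):

  # List the resources served under the llmd.ai API group; empty output means the group is absent
  kubectl api-resources --api-group=llmd.ai
  # Query the CRD directly; a NotFound error here matches the controller's ScalingCRDNotFound reason
  kubectl get crd variantautoscalings.llmd.ai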
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:48.409560] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:48.416487] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:49.416767] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:49.424957] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:50.425183] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:50.432392] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:51.432651] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:51.440717] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:52.440939] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:52.448195] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:53.448545] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:53.455930] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:54.456330] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:54.464010] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:55.464346] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:55.474590] end - ✅ in 0.010s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:56.474906] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:56.482035] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:56:57.482350] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:57.489620] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:58.489897] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:58.496946] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:56:59.497342] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:56:59.504942] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:00.505356] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:00.513068] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:01.513495] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:01.520845] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:02.521274] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:02.531034] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
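Every condition that is not True carries the same reason, ScalingCRDNotFound: the controller tried to fetch the VariantAutoscaling object kserve-ci-e2e-test/autoscale-stop-keda-kserve-va and the API server has no kind "VariantAutoscaling" in version "llmd.ai/v1alpha1". That points at a CRD missing from the test cluster rather than a slow rollout, so MainWorkloadReady, WorkloadsReady, and Ready cannot become True no matter how long the test waits. A quick way to confirm is to list the cluster's CRDs; the sketch below is a hypothetical diagnostic, not part of the test suite, and it assumes the kubernetes Python client, an active kubeconfig, and the conventional plural CRD name variantautoscalings.llmd.ai (inferred from the kind and group, not taken from the log):

    # Hypothetical check: is the CRD backing "llmd.ai/v1alpha1" kind
    # VariantAutoscaling registered on the cluster? The plural CRD name
    # below is an assumption derived from the kind and API group.
    from kubernetes import client, config

    def variant_autoscaling_crd_installed() -> bool:
        config.load_kube_config()  # use the current kubeconfig context
        api = client.ApiextensionsV1Api()
        names = {crd.metadata.name for crd in api.list_custom_resource_definition().items}
        return "variantautoscalings.llmd.ai" in names

    if __name__ == "__main__":
        print("VariantAutoscaling CRD installed:", variant_autoscaling_crd_installed())

If this prints False, installing the llm-d VariantAutoscaling CRD before the autoscale-stop-keda case runs (or skipping the case when the CRD is absent) is the plausible fix.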
[The same start/end/Waiting cycle continues without change for iterations 20:57:03 through 20:57:21; every dump shows the identical five conditions and the identical ScalingCRDNotFound message.]
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:22.684632] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:22.691694] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:57:23.692027] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:23.699003] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:24.699291] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:24.706829] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:25.707554] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:25.715781] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:26.716070] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:26.722812] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:27.723070] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:27.730579] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:28.730850] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:28.738025] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:29.738376] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:29.745619] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:30.745935] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:30.753665] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:31.754131] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:31.761274] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:32.761736] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:32.769118] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:33.769486] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:33.777226] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:34.777711] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:34.785469] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:35.785867] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:35.795774] end - ✅ in 0.010s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:57:36.796123] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:36.804164] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:37.804561] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:37.811342] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:38.811671] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:38.818228] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:39.818594] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:39.825459] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:40.825814] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:40.832367] end - ✅ in 0.006s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:41.832721] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:41.840074] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:42.840346] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:42.847622] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:43.848044] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:43.854960] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:44.855237] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:44.862452] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:45.862816] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:45.869917] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:46.870323] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:46.877697] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:57:47.878015] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:57:47.885185] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] (the one-second get_llmisvc poll and the identical 'Waiting: Missing true conditions' message above repeat verbatim from 20:57:48 through 20:58:25, with only the timestamps changing; every iteration reports reason 'ScalingCRDNotFound' for MainWorkloadReady, Ready, and WorkloadsReady, RouterReady stays 'Unknown', and lastTransitionTime stays 2026-04-24T20:48:39Z)
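What the loop above shows: the test's wait helper re-fetches the LLMInferenceService once per second and compares the set of condition types whose status is 'True' against the expected set; it can never converge here, because the controller keeps reporting ScalingCRDNotFound, i.e. no VariantAutoscaling kind is registered under llmd.ai/v1alpha1 on the cluster. Below is a minimal, hypothetical diagnostic sketch (not taken from the test repo) using the kubernetes Python client; the CRD name variantautoscalings.llmd.ai is guessed from the error (kind VariantAutoscaling, group llmd.ai), and the serving.kserve.io group, llminferenceservices plural, and helper names are assumptions for illustration:

# Hypothetical sketch: check whether the VariantAutoscaling CRD is registered
# and reproduce the one-second condition poll that produces the repeated
# "Waiting: Missing true conditions" lines in this log.
import time

from kubernetes import client, config
from kubernetes.client.rest import ApiException


def crd_is_registered(name: str) -> bool:
    # True if the named CustomResourceDefinition exists on the cluster.
    try:
        client.ApiextensionsV1Api().read_custom_resource_definition(name)
        return True
    except ApiException as exc:
        if exc.status == 404:
            return False
        raise


def wait_for_true_conditions(name, namespace, expected, timeout=600):
    # Poll the LLMInferenceService until every expected condition is True,
    # mirroring what test_llm_inference_service.py:632 appears to do.
    api = client.CustomObjectsApi()
    deadline = time.time() + timeout
    missing = set(expected)
    while time.time() < deadline:
        obj = api.get_namespaced_custom_object(
            "serving.kserve.io", "v1alpha1", namespace,
            "llminferenceservices", name)
        conditions = obj.get("status", {}).get("conditions", [])
        true_types = {c["type"] for c in conditions if c.get("status") == "True"}
        missing = set(expected) - true_types
        if not missing:
            return obj
        print(f"Waiting: Missing true conditions: {missing}, got {conditions}")
        time.sleep(1)
    raise TimeoutError(f"conditions never became True: {missing}")


if __name__ == "__main__":
    config.load_kube_config()
    # "variantautoscalings.llmd.ai" is a guessed CRD name; verify it against
    # the cluster before relying on it.
    if not crd_is_registered("variantautoscalings.llmd.ai"):
        print("VariantAutoscaling CRD not installed -- matches ScalingCRDNotFound")
    wait_for_true_conditions("autoscale-stop-keda", "kserve-ci-e2e-test",
                             {"Ready", "WorkloadsReady", "RouterReady"})

Because the CRD never appears during the run, every iteration prints the same condition set until the surrounding wait times out, which is what ultimately fails the task. The log resumes below at the final poll iterations: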
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:24.181662] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:24.188967] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:25.189215] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:25.196272] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:26.196531] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:26.204175] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:27.204546] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:27.212981] end - ✅ in 0.008s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:28.213551] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:28.220422] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:58:29.220683] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:29.228086] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:30.228494] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:30.236963] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:31.237399] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:31.244474] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:32.244768] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:32.253211] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:33.253637] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:33.261159] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:34.261559] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:34.269968] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:35.270354] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:35.279016] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:36.279390] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:36.288163] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:37.288481] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:37.295959] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:38.296264] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:38.303824] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:39.304131] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:39.312212] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:40.312476] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:40.320028] end - ✅ in 0.007s [e2e-llm-inference-service] INFO 
e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:41.320335] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:41.327872] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] 
[2026-04-24T20:58:42.328374] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:42.335968] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:43.336390] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:43.344349] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling 
kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:44.344665] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:44.353083] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:45.353546] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:45.361144] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, 
{'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:46.361534] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:46.369283] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:47.369734] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:47.376831] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed 
to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:48.377142] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:48.384766] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:49.385080] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:49.393967] end - ✅ in 0.009s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 
'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:50.394364] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:50.402109] end - ✅ in 0.007s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'}, {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}] [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:51.402490] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={} [e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:51.410280] end - ✅ in 0.008s [e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [{'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: 
failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'severity': 'Info', 'status': 'False', 'type': 'MainWorkloadReady'},
 {'lastTransitionTime': '2026-04-24T20:48:39Z', 'severity': 'Info', 'status': 'True', 'type': 'PresetsCombined'},
 {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'Ready'},
 {'lastTransitionTime': '2026-04-24T20:48:39Z', 'status': 'Unknown', 'type': 'RouterReady'},
 {'lastTransitionTime': '2026-04-24T20:48:39Z', 'message': 'failed to reconcile main workload scaling: failed to reconcile main VA: failed to get v1alpha1.VariantAutoscaling kserve-ci-e2e-test/autoscale-stop-keda-kserve-va: no matches for kind "VariantAutoscaling" in version "llmd.ai/v1alpha1"', 'reason': 'ScalingCRDNotFound', 'status': 'False', 'type': 'WorkloadsReady'}]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:58:52.410820] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:43 [get_llmisvc] [2026-04-24T20:58:52.419454] end - ✅ in 0.008s
[e2e-llm-inference-service] INFO e2e.llmisvc.test_llm_inference_service:test_llm_inference_service.py:632 Waiting: Missing true conditions: {'Ready', 'WorkloadsReady', 'RouterReady'}, expected {'Ready', 'WorkloadsReady', 'RouterReady'}, got [... the same five conditions listed above, unchanged ...]
[... this get_llmisvc start/end pair and the identical "Waiting: Missing true conditions" entry repeated roughly once per second, verbatim apart from the timestamps, from 20:58:53.419918 through 20:59:09.574201; the condition set (reason ScalingCRDNotFound, lastTransitionTime 2026-04-24T20:48:39Z) never changed over the whole wait ...]
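Note: the ScalingCRDNotFound condition above means the API server has no mapping for kind VariantAutoscaling in group/version llmd.ai/v1alpha1, i.e. the CRD (presumably shipped by the llm-d variant-autoscaler component) was never installed on this test cluster. A quick way to confirm from a shell with the same kubeconfig; this is a sketch only, and the plural resource name variantautoscalings.llmd.ai is an assumption inferred from the kind and group in the error:

    # List everything the llmd.ai API group serves; empty output means the CRD is missing.
    kubectl api-resources --api-group=llmd.ai
    # Or query the assumed CRD name directly and print the versions it serves.
    kubectl get crd variantautoscalings.llmd.ai -o jsonpath='{.spec.versions[*].name}'

If the CRD is absent, every autoscaling test that reconciles a VariantAutoscaling object will sit in this wait loop until it times out, which matches the cluster of KEDA/HPA failures in the summary further down.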
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [get_llmisvc] [2026-04-24T20:59:10.574536] start - args=(, 'autoscale-stop-keda', 'kserve-ci-e2e-test', 'v1alpha1'), kwargs={}
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [get_llmisvc] [2026-04-24T20:59:10.593721] end - ❌ 0.019s: ❌ Exception when calling CustomObjectsApi->get_namespaced_custom_object for LLMInferenceService: (500)
[e2e-llm-inference-service] Reason: Internal Server Error
[e2e-llm-inference-service] HTTP response headers: HTTPHeaderDict({'Audit-Id': '74aa7c35-68f9-4b19-91c2-22c071f8bad8', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload', 'X-Kubernetes-Pf-Flowschema-Uid': '20fe89dc-62d1-40a8-bde3-b4d3893bb017', 'X-Kubernetes-Pf-Prioritylevel-Uid': '3f197158-a38d-4553-8c16-85addf5fb0ab', 'Date': 'Fri, 24 Apr 2026 20:59:10 GMT', 'Content-Length': '264'})
[e2e-llm-inference-service] HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"conversion webhook for serving.kserve.io/v1alpha2, Kind=LLMInferenceService failed: Post \"https://llmisvc-webhook-server-service.kserve.svc:443/convert?timeout=30s\": EOF","code":500}
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [wait_for_llm_isvc_ready] [2026-04-24T20:59:10.593949] end - ❌ 644.842s: [... same (500) conversion-webhook response, headers and body identical to the entry above ...]
[e2e-llm-inference-service] INFO e2e.llmisvc.logging:logging.py:34 [delete_llmisvc] [2026-04-24T20:59:10.594081] start - args=(, {'api_version': 'serving.kserve.io/v1alpha1',
 'kind': 'LLMInferenceService',
 'metadata': {'annotations': None, 'creation_timestamp': None, 'deletion_grace_period_seconds': None, 'deletion_timestamp': None, 'finalizers': None, 'generate_name': None, 'generation': None, 'labels': None, 'managed_fields': None, 'name': 'autoscale-stop-keda', 'namespace': 'kserve-ci-e2e-test', 'owner_references': None, 'resource_version': None, 'self_link': None, 'uid': None},
 'spec': {'baseRefs': [{'name': 'router-managed-autoscale-stop-k-e34ebc49'}, {'name': 'workload-llmd-simulator-no-repl-cc817675'}, {'name': 'scaling-keda-autoscale-stop-ked-a9b76853'}]},
 'status': None}), kwargs={}
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [delete_llmisvc] [2026-04-24T20:59:10.613774] end - ❌ 0.019s: ❌ Exception when calling CustomObjectsApi->delete_namespaced_custom_object for LLMInferenceService: (500) [... same conversion-webhook failure body as above; Audit-Id 2d8c27f5-a6b1-412c-b5f2-e7e83b419e58 ...]
[e2e-llm-inference-service] WARNING e2e.llmisvc.test_llm_autoscaling:test_llm_autoscaling.py:306 Failed to cleanup service: ❌ Exception when calling CustomObjectsApi->delete_namespaced_custom_object for LLMInferenceService: (500) [... same conversion-webhook failure body as above ...]
[e2e-llm-inference-service] ERROR e2e.llmisvc.logging:logging.py:48 [test_llm_autoscaling_stop_keda] [2026-04-24T20:59:10.614123] end - ❌ 645.265s: ❌ Exception when calling CustomObjectsApi->get_namespaced_custom_object for LLMInferenceService: (500) [... same conversion-webhook failure headers and body as above ...]
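Note: from 20:59:10 onward the failure mode changes. The API server itself returns 500 because it cannot reach the llmisvc conversion webhook (the Post to llmisvc-webhook-server-service.kserve.svc ends in EOF), so even the GET and DELETE calls used for polling and cleanup fail. A few hedged checks, assuming the service and CRD names implied by the error message; the pod label selector in particular is a guess:

    # Does the webhook service still have ready endpoints behind it?
    kubectl -n kserve get svc,endpoints llmisvc-webhook-server-service
    # Are the controller pods that serve the webhook alive? (label selector is an assumption)
    kubectl -n kserve get pods -l control-plane=kserve-llmisvc-controller-manager -o wide
    # Show the conversion stanza on the CRD that points at the webhook
    # (plural CRD name inferred from Kind=LLMInferenceService in group serving.kserve.io).
    kubectl get crd llminferenceservices.serving.kserve.io -o jsonpath='{.spec.conversion}'

An EOF on /convert usually indicates the connection was closed mid-request, e.g. the webhook pod crashing or being evicted, rather than a TLS or DNS problem; the controller pod logs would be needed to confirm.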
[e2e-llm-inference-service] =============================== warnings summary ===============================
[e2e-llm-inference-service] PytestUnknownMarkWarning ("Unknown pytest.mark.<name> - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/how-to/mark.html"), raised twice at each location:
  llmisvc/test_llm_inference_service.py:151, :200, :252 - pytest.mark.custom_gateway
  llmisvc/test_llm_inference_service.py:299 - pytest.mark.cluster_gpu
  llmisvc/test_llm_inference_service.py:316 - pytest.mark.no_scheduler
  llmisvc/test_llm_autoscaling.py:561, :609 - pytest.mark.cluster_gpu
[e2e-llm-inference-service] llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-workload-pd-cpu-model-fb-opt-125m]
  /workspace/source/python/kserve/.venv/lib64/python3.11/site-packages/pytest_asyncio/plugin.py:761: DeprecationWarning: The event_loop fixture provided by pytest-asyncio has been redefined in /workspace/source/test/e2e/conftest.py:43. Replacing the event_loop fixture with a custom implementation is deprecated and will lead to errors in the future. If you want to request an asyncio event loop with a scope other than function scope, use the "scope" argument to the asyncio mark when marking the tests. If you want to return different types of event loops, use the event_loop_policy fixture.
[e2e-llm-inference-service] PytestWarning ("The test is marked with '@pytest.mark.asyncio' but it is not an async function. Please remove the asyncio mark. If the test is not marked explicitly, check for global marks applied via 'pytestmark'."), raised at llmisvc/test_llm_inference_service.py:124 (@pytest.mark.llminferenceservice) for each of:
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-workload-pd-cpu-model-fb-opt-125m]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-custom-route-timeout-pd-scheduler-managed-workload-pd-cpu-model-fb-opt-125m]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-with-refs-pd-scheduler-managed-workload-pd-cpu-model-fb-opt-125m]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-no-scheduler-workload-single-cpu-model-fb-opt-125m]
  test_llm_inference_service[cluster_cpu-cluster_multi_node-router-managed-workload-simulated-dp-ep-cpu-model-fb-opt-125m]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-inline-config-workload-llmd-simulator]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-configmap-ref-workload-llmd-simulator]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-replicas-workload-llmd-simulator]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-precise-prefix-cache-inline-config-workload-llmd-simulator-kvcache]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-with-gateway-ref-router-with-managed-route-model-fb-opt-125m-workload-llmd-simulator]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-workload-single-cpu-model-fb-opt-125m]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-custom-route-timeout-scheduler-managed-workload-single-cpu-model-fb-opt-125m]
  test_llm_inference_service[cluster_cpu-cluster_single_node-router-with-refs-scheduler-managed-workload-single-cpu-model-fb-opt-125m]
  and at llmisvc/test_llm_inference_service_stop.py:40 for test_llm_stop_feature[cluster_cpu-cluster_single_node-router-managed-workload-single-cpu-model-fb-opt-125m]
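Note: the PytestUnknownMarkWarning entries above are cosmetic but easy to silence by registering the marks. A minimal sketch, assuming the suite reads a pytest.ini under test/e2e; the file location and the mark descriptions are assumptions, and if the config file already exists the markers key should be merged in rather than appended:

    # Register the unregistered marks pytest complained about (sketch only).
    cat >> test/e2e/pytest.ini <<'EOF'
    [pytest]
    markers =
        custom_gateway: tests that bring their own Gateway resource
        cluster_gpu: tests that require GPU nodes
        no_scheduler: tests that run without the inference scheduler
    EOF

The "@pytest.mark.asyncio but it is not an async function" warnings suggest a module-level pytestmark applies asyncio to synchronous tests; scoping the asyncio mark to only the async tests (or making those tests async) would remove that noise as well.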
[e2e-llm-inference-service] -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
[e2e-llm-inference-service] =========================== short test summary info ============================
[e2e-llm-inference-service] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_hpa_deployment[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-with-refs-pd-scheduler-managed-workload-pd-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_keda_deployment[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_inference_service.py::test_llm_inference_service[cluster_cpu-cluster_single_node-router-managed-scheduler-with-configmap-ref-workload-llmd-simulator]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_hpa_lws[cluster_cpu-cluster_multi_node-router-managed-workload-llmd-simulator-lws-scaling-hpa]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_inference_service_stop.py::test_llm_stop_feature[cluster_cpu-cluster_single_node-router-managed-workload-single-cpu-model-fb-opt-125m]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_keda_lws[cluster_cpu-cluster_multi_node-router-managed-workload-llmd-simulator-lws-scaling-keda]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_cleanup_hpa[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_update_keda[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_cleanup_keda[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_stop_hpa[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] FAILED llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_stop_keda[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-keda]
[e2e-llm-inference-service] ERROR llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_update_hpa[cluster_cpu-cluster_single_node-router-managed-workload-llmd-simulator-no-replicas-scaling-hpa]
[e2e-llm-inference-service] !!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 13 failures !!!!!!!!!!!!!!!!!!!!!!!!!!
[e2e-llm-inference-service] !!!!!!!!!!!! xdist.dsession.Interrupted: stopping after 10 failures !!!!!!!!!!!!
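Note: with pytest-xdist interrupting the session at its failure threshold, any remaining tests never ran. Once the CRD and webhook issues are addressed, the failed subset can be replayed without repeating the full 100-minute run. Sketch only; the working directory and marker expression are assumptions based on the marks and test ids visible in this log:

    # Replay only the tests that failed in the previous session (uses .pytest_cache):
    pytest llmisvc/ --last-failed -m 'llminferenceservice and cluster_cpu'
    # Or drill into a single parametrized case by substring match:
    pytest llmisvc/test_llm_autoscaling.py::test_llm_autoscaling_stop_keda -k scaling-keda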
[e2e-llm-inference-service] = 12 failed, 19 passed, 3 skipped, 29 warnings, 1 error in 5994.12s (1:39:54) ==
[must-gather] [must-gather ] OUT 2026-04-24T20:59:12.871360101Z Using must-gather plug-in image: quay.io/modh/must-gather:rhoai-2.24
[must-gather] When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information:
[must-gather] ClusterID: 94a39896-ddd8-46ae-a5bc-c59d51de2364
[must-gather] ClientVersion: 4.21.5
[must-gather] ClusterVersion: Stable at "4.20.19"
[must-gather] ClusterOperators:
[must-gather] clusteroperator/authentication is missing
[must-gather] clusteroperator/cloud-credential is missing
[must-gather] clusteroperator/cluster-autoscaler is missing
[must-gather] clusteroperator/config-operator is missing
[must-gather] clusteroperator/etcd is missing
[must-gather] clusteroperator/machine-api is missing
[must-gather] clusteroperator/machine-approver is missing
[must-gather] clusteroperator/machine-config is missing
[must-gather] clusteroperator/marketplace is missing
[must-gather] [must-gather ] OUT 2026-04-24T20:59:12.941586615Z namespace/openshift-must-gather-b4cn7 created
[must-gather] [must-gather ] OUT 2026-04-24T20:59:12.946234553Z clusterrolebinding.rbac.authorization.k8s.io/must-gather-tq2gt created
[must-gather] [must-gather ] OUT 2026-04-24T20:59:12.97280261Z pod for plug-in image quay.io/modh/must-gather:rhoai-2.24 created
[must-gather] [must-gather-wskvv] OUT 2026-04-24T21:01:32.973828292Z gather did not start: resource name may not be empty
[must-gather] [must-gather ] OUT 2026-04-24T21:01:32.979904477Z namespace/openshift-must-gather-b4cn7 deleted
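Note: the gather pod was created but never started ("resource name may not be empty"), so no diagnostics were collected for this run; the unschedulable/unreachable worker node visible in the cluster state below is a plausible cause. Rerunning the gather by hand may work once a node is schedulable. Sketch using the same plug-in image as the job:

    # Re-run must-gather manually and keep the output locally:
    oc adm must-gather --image=quay.io/modh/must-gather:rhoai-2.24 --dest-dir=./must-gather-out
    # If scheduling is the problem, pin the gather pod to a node that is Ready:
    oc adm must-gather --image=quay.io/modh/must-gather:rhoai-2.24 --node-name=<ready-node>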
[must-gather] Reprinting Cluster State:
[must-gather] [... "When opening a support case" preamble, ClusterID, ClientVersion, and ClusterVersion identical to the first printout above ...]
[must-gather] ClusterOperators:
[must-gather] clusteroperator/dns is not available (DNS "default" is unavailable.) because DNS default is degraded
[must-gather] clusteroperator/image-registry is not available (Available: The deployment does not have available replicas
[must-gather] NodeCADaemonAvailable: The daemon set node-ca does not have available replicas
[must-gather] ImagePrunerAvailable: Pruner CronJob has been created) because
[must-gather] clusteroperator/ingress is not available (The "default" ingress controller reports Available=False: IngressControllerUnavailable: One or more status conditions indicate unavailable: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.)) because The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: DeploymentAvailable=False (DeploymentUnavailable: The deployment has Available status condition set to False (reason: MinimumReplicasUnavailable) with message: Deployment does not have minimum availability.), DeploymentReplicasMinAvailable=False (DeploymentMinimumReplicasNotMet: 0/1 of replicas are available, max unavailable is 0: Some pods are not scheduled: Pod "router-default-85965c6c4b-g7g7x" cannot be scheduled: 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. Make sure you have sufficient worker nodes.).
[must-gather] clusteroperator/network is progressing: DaemonSet "/openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)
[must-gather] DaemonSet "/openshift-ovn-kubernetes/ovnkube-node" is not available (awaiting 1 nodes)
[must-gather] DaemonSet "/openshift-network-operator/iptables-alerter" is not available (awaiting 1 nodes)
[must-gather] DaemonSet "/openshift-multus/multus" is not available (awaiting 1 nodes)
[must-gather] DaemonSet "/openshift-multus/multus-additional-cni-plugins" is not available (awaiting 1 nodes)
[must-gather] Deployment "/openshift-network-console/networking-console-plugin" is not available (awaiting 1 nodes)
[must-gather] clusteroperator/node-tuning is not available (DaemonSet "tuned" has no available Pod(s)) because DaemonSet "tuned" available
[must-gather] clusteroperator/storage is not available (AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service) because AWSEBSCSIDriverOperatorCRDegraded: All is well
[must-gather] [... the same nine "clusteroperator/... is missing" lines as in the first printout ...]
[must-gather] error: gather did not start for pod must-gather-wskvv: resource name may not be empty
[git-push-artifacts] WORK_DIR: /workspace/odh-ci-artifacts
[git-push-artifacts] REPO_PATH: opendatahub-io/odh-build-metadata
[git-push-artifacts] REPO_BRANCH: ci-artifacts
[git-push-artifacts] SPARSE_FILE_PATH: test-artifacts/docs
[git-push-artifacts] SOURCE_PATH: /workspace/artifacts-dir
[git-push-artifacts] DEST_PATH: test-artifacts/kserve-group-test-9rr7z
[git-push-artifacts] ALWAYS_PASS: false
[git-push-artifacts] configuring gh token
[git-push-artifacts] taking github token from Konflux bot
[git-push-artifacts] Initialized empty Git repository in /workspace/odh-ci-artifacts/.git/
[git-push-artifacts] Using partial fetch with sparse checkout for: test-artifacts/docs
[git-push-artifacts] From https://github.com/opendatahub-io/odh-build-metadata
[git-push-artifacts]  * branch ci-artifacts -> FETCH_HEAD
[git-push-artifacts]  * [new branch] ci-artifacts -> origin/ci-artifacts
[git-push-artifacts] Already on 'ci-artifacts'
[git-push-artifacts] branch 'ci-artifacts' set up to track 'origin/ci-artifacts'.
[git-push-artifacts] TASK_NAME=kserve-group-test-9rr7z-e2e-llm-inference-service
[git-push-artifacts] PIPELINERUN_NAME=kserve-group-test-9rr7z
[git-push-artifacts] From https://github.com/opendatahub-io/odh-build-metadata
[git-push-artifacts]  * branch ci-artifacts -> FETCH_HEAD
[git-push-artifacts] Already up to date.
[git-push-artifacts] -rw-r--r--. 1 root 1001540000 3120 Apr 24 21:01 /workspace/odh-ci-artifacts/test-artifacts/kserve-group-test-9rr7z/e2e-llm-inference-service.tar.gz
[git-push-artifacts] [ci-artifacts 849007c] Updating CI Artifacts in e2e-llm-inference-service
[git-push-artifacts]  1 file changed, 0 insertions(+), 0 deletions(-)
[git-push-artifacts]  create mode 100644 test-artifacts/kserve-group-test-9rr7z/e2e-llm-inference-service.tar.gz
[git-push-artifacts] From https://github.com/opendatahub-io/odh-build-metadata
[git-push-artifacts]  * branch ci-artifacts -> FETCH_HEAD
[git-push-artifacts] Already up to date.
[git-push-artifacts] To https://github.com/opendatahub-io/odh-build-metadata.git
[git-push-artifacts]  fb03d75..849007c ci-artifacts -> ci-artifacts
[fail-if-needed] Failing pipeline because deploy-and-e2e step failed
container step-fail-if-needed has failed : [{"key":"StartedAt","value":"2026-04-24T21:01:36.794Z","type":3}]